Due to the exponential growth of video data, aided by rapid advancements in multimedia technologies, it has become difficult for users to obtain information from large video collections. The process of providing an abstract of an entire video that includes its most representative frames is known as static video summarization. This method enables rapid exploration, indexing, and retrieval of massive video libraries. We propose a framework for static video summarization based on Binary Robust Invariant Scalable Keypoints (BRISK) and the bisecting K-means clustering algorithm. The method effectively recognizes relevant frames by extracting BRISK keypoints and descriptors from video sequences. The video frames' BRISK features are clustered using bisecting K-means, and the keyframe is determined by selecting the frame closest to each cluster center. Without requiring any clustering parameters, the appropriate number of clusters is determined using the silhouette coefficient. Experiments were carried out on the publicly available Open Video Project (OVP) dataset, which contains videos of different genres. The proposed method's effectiveness is compared to existing methods using a variety of evaluation metrics, and it achieves a trade-off between computational cost and quality.
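The keyframe-selection stage described above can be sketched as follows. This is an illustrative pure-Python version, not the paper's implementation: the toy 2-D "frame descriptors" stand in for the binary BRISK descriptors (which would be compared with Hamming rather than Euclidean distance), and the seeding strategy is an assumption made for determinism.

```python
# Bisecting K-means over per-frame feature vectors; the frame nearest each
# cluster centroid becomes a keyframe.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(pts):
    n = len(pts)
    return tuple(sum(c) / n for c in zip(*pts))

def two_means(pts):
    """Split one cluster in two; seeds = the two mutually farthest points."""
    seeds = max(((p, q) for p in pts for q in pts), key=lambda pq: dist2(*pq))
    for _ in range(10):                      # a few Lloyd iterations suffice here
        groups = ([], [])
        for p in pts:
            groups[dist2(p, seeds[0]) > dist2(p, seeds[1])].append(p)
        seeds = tuple(centroid(g) for g in groups)
    return groups

def bisecting_kmeans(pts, k):
    clusters = [list(pts)]
    while len(clusters) < k:                 # always bisect the largest-SSE cluster
        worst = max(clusters, key=lambda c: sum(dist2(p, centroid(c)) for p in c))
        clusters.remove(worst)
        clusters.extend(two_means(worst))
    return clusters

def keyframes(pts, k):
    return [min(c, key=lambda p: dist2(p, centroid(c))) for c in bisecting_kmeans(pts, k)]

frames = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 8), (8, 9), (5, 0), (5, 1)]
print(sorted(keyframes(frames, 3)))  # -> [(0, 0), (5, 0), (9, 9)]
```

In the paper, k would not be fixed by hand but chosen as the value maximizing the silhouette coefficient over candidate cluster counts.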
Nowadays, the COVID-19 virus is spreading rampantly. Some testing tools and kits are available for diagnosing the virus, but only in limited quantities. Automated COVID-19 diagnosis techniques are needed to detect the disease from radiological images. Previous research has focused on enhancing AI (Artificial Intelligence) methods that use X-ray images for detecting COVID-19. The most common symptoms of COVID-19 are fever, dry cough and sore throat, and these symptoms may progress to a rigorous type of pneumonia with severe complications. Since medical imaging is not currently recommended in Canada for primary COVID-19 diagnosis, computer-aided systems are implemented for the early identification of COVID-19, which aids in monitoring disease progression and thus decreases the death rate. Here, a deep learning-based automated method for feature extraction and classification is enhanced for the detection of COVID-19 from computed tomography (CT) images. The suggested method operates in three main stages: data preprocessing, feature extraction and classification. The approach fuses deep features extracted with the Inception 14 and VGG-16 models. Finally, a Multi-scale Improved ResNet (MSI-ResNet) classifier is developed to detect and classify the CT images into distinct class labels. The experimental validation of the suggested method is performed on the available open-source COVID-CT dataset, which consists of 760 CT images. The experimental results reveal that the proposed approach offers strong performance with high specificity, accuracy and sensitivity.
Component-based software engineering is concerned with the development of software that can satisfy customer requirements through reuse or independent development. Coupling and cohesion measurements are primarily used to analyse software design quality, increase reliability and reduce system software complexity. The complexity measurement of cohesion and coupling components analyzes the relationships between component modules. This paper proposes a component selection framework based on the Hexa-oval optimization algorithm for selecting suitable components from a repository. It measures the interface density of coupling and cohesion modules in a modular software system. The cohesion measurement considers two parameters, low cohesion and high cohesion, for analyzing complexity, while coupling is measured between a component's inside and outside parameters. In the final step, the measured coupling and cohesion values are averaged over the component parameters. This paper measures the complexity of direct and indirect interactions among components, and the proposed algorithm selects the optimal component from the repository. The best results are observed for high cohesion and low coupling in component-based software engineering.
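To make the coupling/cohesion idea concrete, here is one common way to quantify both from a module dependency graph. This is an illustrative sketch, not the paper's exact formulas: cohesion is taken as the density of intra-module links and coupling as the density of inter-module links, and the modules and edges are invented for the example.

```python
from itertools import combinations

def cohesion(module, edges):
    """Fraction of possible internal links that actually exist."""
    possible = len(module) * (len(module) - 1) / 2
    actual = sum(1 for a, b in combinations(module, 2)
                 if (a, b) in edges or (b, a) in edges)
    return actual / possible if possible else 0.0

def coupling(mod_a, mod_b, edges):
    """Fraction of possible cross-module links that actually exist."""
    possible = len(mod_a) * len(mod_b)
    actual = sum(1 for a in mod_a for b in mod_b
                 if (a, b) in edges or (b, a) in edges)
    return actual / possible if possible else 0.0

m1, m2 = ["a", "b", "c"], ["x", "y"]
edges = {("a", "b"), ("b", "c"), ("a", "c"),   # dense inside m1
         ("c", "x")}                           # single link across modules
print(cohesion(m1, edges), coupling(m1, m2, edges))  # high cohesion, low coupling
```

The desirable outcome the abstract describes, high cohesion and low coupling, corresponds to the first value being near 1 and the second near 0.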
Many search-based algorithms have been successfully applied to several software engineering activities. Genetic algorithms (GAs), which imitate the theory of natural selection and evolution, are the algorithms most used by scholars in scientific domains to solve software testing problems. The harmony search algorithm (HSA), one of the most recent search algorithms, imitates the behavior of a musician seeking the best harmony. Scholars have examined the similarities and differences between genetic algorithms and the harmony search algorithm in diverse research domains. Test data generation is a critical task in software validation; unfortunately, no prior work compares the performance of genetic algorithms and the harmony search algorithm in this process. This paper studies the similarities and differences between the two algorithms based on their ability and speed in finding the required test data. The current research performs an empirical comparison of the HSA and the GAs, and the significance of the results is then estimated using the t-test. The study investigates the efficiency of the harmony search algorithm and the genetic algorithms according to (1) time performance, (2) the significance of the generated test data, and (3) the adequacy of the generated test data to satisfy a given testing criterion. The results showed that the harmony search algorithm is significantly faster than the genetic algorithms, because the t-test showed that the p-value of the time measurements is 0.026 < α (where α is the significance level, 0.05, at a 95% confidence level). In contrast, there is no significant difference between the two algorithms in generating adequate test data, because the t-test showed that the p-value of the fitness values is 0.25 > α.
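The significance test behind those p-values can be sketched as the pooled two-sample t statistic. The runtime samples below are invented for illustration; in practice the p-value for the resulting t would be read from a t-distribution with n1 + n2 - 2 degrees of freedom (e.g. via scipy.stats), which is omitted here.

```python
from statistics import mean, variance

def pooled_t(sample_a, sample_b):
    """Pooled two-sample t statistic (assumes a common variance)."""
    n1, n2 = len(sample_a), len(sample_b)
    sp2 = ((n1 - 1) * variance(sample_a) + (n2 - 1) * variance(sample_b)) / (n1 + n2 - 2)
    return (mean(sample_a) - mean(sample_b)) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5

hsa_times = [1.0, 2.0, 3.0]   # hypothetical HSA runtimes
ga_times = [4.0, 5.0, 6.0]    # hypothetical GA runtimes
print(round(pooled_t(hsa_times, ga_times), 3))  # large |t| -> small p-value
```

A large-magnitude t (small p-value, as with the 0.026 for runtimes) rejects the null hypothesis of equal means; a t near zero (as with the 0.25 for fitness values) does not.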
COVID-19, the virus of fear and anxiety, is one of the most recent and emergent of various respiratory disorders. It is similar to MERS-CoV and SARS-CoV, the viruses that affected large populations in different countries in 2012 and 2002, respectively. Various standard models have been used for COVID-19 epidemic prediction, but they suffer from low accuracy due to limited data availability and a high level of uncertainty. The proposed approach uses the machine learning-based time-series Facebook NeuralProphet model to predict the numbers of deaths and confirmed cases, and compares it with a Poisson distribution model and a random forest model. The analysis was performed on a dataset covering the period from January 1st, 2020 to July 16th, 2021, and the model was developed to obtain forecast values until September 2021. This study aimed to predict the second wave of the coronavirus pandemic in India using the latest time-series model, in order to observe and predict the pandemic situation across the country. In India, cases have been increasing rapidly day by day since mid-February 2021. The death-rate prediction produced by the proposed model forecasts the COVID-19 dataset well, particularly in the second wave, and the model works effectively for future validation.
In vehicular ad hoc networks (VANETs), topology information (TI) is updated frequently due to vehicle mobility. These frequent topology changes increase the topology maintenance overhead. To reduce the control message overhead, cluster-based routing schemes have been proposed. In cluster-based routing schemes, the nodes are divided into virtual groups, and each group (logical node) is considered a cluster. Topology changes are accommodated within each cluster, so broadcasting TI to the whole VANET is not required. The cluster head (CH) is responsible for managing the communication of a node with nodes outside the cluster. However, transmitting real-time data via a CH may cause delays in VANETs. Such real-time data require quick service and should be routed through the shortest path when quality of service (QoS) is required. This paper proposes a hybrid scheme that transmits time-critical data through the QoS shortest path and normal data through CHs. In this way, real-time data are delivered efficiently to the destination on time, while routine data are transmitted through CHs to reduce the topology maintenance overhead. The work is validated through a series of simulations, and the results show that the proposed scheme outperforms existing algorithms in terms of topology maintenance overhead, QoS, and real-time and routine packet transmission.
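The "QoS shortest path" leg of such a hybrid scheme amounts to a delay-weighted shortest-path computation. The sketch below uses textbook Dijkstra over invented link delays; the actual scheme's path metric and topology are assumptions for the example, and routine packets would instead traverse the CH hierarchy.

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over edge delays; returns (total_delay, path)."""
    pq = [(0, src, [src])]
    seen = set()
    while pq:
        delay, node, path = heapq.heappop(pq)
        if node == dst:
            return delay, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(pq, (delay + w, nxt, path + [nxt]))
    return float("inf"), []

# hypothetical link delays (ms) between vehicles
links = {"A": {"B": 2, "C": 5}, "B": {"C": 1, "D": 7}, "C": {"D": 2}, "D": {}}
print(shortest_path(links, "A", "D"))  # -> (5, ['A', 'B', 'C', 'D'])
```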
In a digital world moving at breakneck speed, consultancy services have emerged as one of the prominent resources for seeking effective, sustainable and economically viable solutions to a given crisis. Present-day consultancy services are aided by the use of multiple tools and techniques. However, ensuring the security of these tools and techniques is an important concern for consultants, because even a slight malfunction of any tool could alter the results drastically. Consultants usually address these functions after establishing the client's needs and developing the appropriate strategy. Nevertheless, most consultants tend to focus only on the intended outcomes and often ignore security-specific issues. Our research study is an initiative recommending the use of a hybrid computational technique, based on the fuzzy Analytical Hierarchy Process (AHP) and the fuzzy Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), for prioritizing the tools and techniques used in consultancy services on the basis of their security features and efficacy. The empirical analysis conducted in this context shows that, after implementing the assessment process, the tools and techniques are ranked as A7 > A1 > A4 > A2 > A3 > A5 > A6, with the General Electric McKinsey (GE-McKinsey) Nine-box Matrix (A7) obtaining the highest rank. Thus, the outcomes show that this order of selection of the tools and techniques will give the most effective and secure services. Awareness of the best tools and techniques in consultancy services is as important as selecting the most secure tool for solving a given problem. In this league, the results obtained in this study would be a conclusive and reliable reference for consultants.
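The ranking mechanics behind such a pipeline can be shown with a crisp (non-fuzzy) TOPSIS sketch: normalize the decision matrix, weight it, and rank alternatives by closeness to the ideal solution. The scores and weights below are invented, all criteria are treated as benefit criteria, and the paper's fuzzy variant wraps each judgement in a fuzzy number before these same steps.

```python
def topsis(matrix, weights):
    """Closeness coefficients for alternatives (rows) over criteria (columns)."""
    cols = list(zip(*matrix))
    norms = [sum(v * v for v in col) ** 0.5 for col in cols]   # vector normalization
    weighted = [[w * v / n for v, n, w in zip(row, norms, weights)] for row in matrix]
    best = [max(col) for col in zip(*weighted)]    # ideal solution
    worst = [min(col) for col in zip(*weighted)]   # anti-ideal solution
    def d(row, ref):
        return sum((v - r) ** 2 for v, r in zip(row, ref)) ** 0.5
    return [d(r, worst) / (d(r, worst) + d(r, best)) for r in weighted]

scores = [[7, 9, 8], [8, 7, 6], [3, 4, 5]]      # alternatives x criteria (invented)
weights = [0.5, 0.3, 0.2]                        # AHP-derived criterion weights
cc = topsis(scores, weights)
ranking = sorted(range(len(cc)), key=lambda i: -cc[i])
print(ranking)   # alternative indices, best first
```

The alternative with the highest closeness coefficient (here index 0) plays the role that A7 plays in the study's ranking.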
Long-wavelength GaSb-based quantum well lasers have been optimized for high coupling efficiency into an optical system. Two approaches were used to reduce the vertical far-field (VFF). In the first approach, we showed that using a V-shaped weaker waveguide in the n-cladding layer dramatically reduces the vertical beam divergence without any performance degradation compared to a conventional broad-waveguide laser structure. Starting from a broad-waveguide laser structure design that gives a low threshold current and a large VFF, the structure was modified to decrease the VFF while maintaining a low threshold-current density. In a first step, the combination of a narrow optical waveguide and a reduced refractive index step between the waveguide and the cladding layers reduces the VFF from 67° to 42°. The threshold current density was kept as low as ~190 A/cm2 for 1000 × 100 μm2 devices by careful adjustment of the doping profile in the p-type cladding layer. The insertion of a V-shaped weaker waveguide in the n-cladding layer is shown to allow a further reduction of the VFF to a value as low as 35° for better light-coupling efficiency into an optical system, without any degradation of the device performance. In the second approach, we showed that a depressed-cladding structure design also allows the reduction of the VFF while keeping the threshold current density low (210 A/cm2), a slightly higher value compared to the first design.
Datasets with an imbalanced class distribution are difficult to handle with standard classification algorithms. In supervised learning, dealing with the problem of class imbalance is still considered a challenging research problem. Various machine learning techniques are designed to operate on balanced datasets; therefore, state-of-the-art undersampling, oversampling and hybrid strategies have been proposed to deal with imbalanced datasets, but highly skewed datasets still pose problems of generalization and noise generation during resampling. To overcome these problems, this paper proposes a majority clustering model for classification of imbalanced datasets, known as MCBC-SMOTE (Majority Clustering for Balanced Classification-SMOTE). The model provides a method to convert a binary classification problem into a multi-class problem. In the proposed algorithm, the number of clusters for the majority class is calculated using the elbow method, and the minority class is oversampled as an average of the clustered majority classes to generate a symmetrical class distribution. The proposed technique is cost-effective, reduces noise generation and successfully removes the imbalances present between and within classes. Evaluations on diverse real datasets showed better classification results than state-of-the-art existing methodologies on several performance metrics.
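For context, here is a sketch of the classic SMOTE interpolation step that the MCBC-SMOTE name references: a synthetic minority sample is placed at a random point on the segment between a minority sample and one of its nearest minority neighbours. The data points and the fixed seed are invented for illustration; the paper's variant modifies how the target points are chosen via majority-class clustering.

```python
import random

def nearest_neighbor(x, others):
    return min(others, key=lambda o: sum((a - b) ** 2 for a, b in zip(x, o)))

def smote_sample(x, neighbors, rng):
    """Interpolate a synthetic point between x and its nearest neighbour."""
    nn = nearest_neighbor(x, neighbors)
    gap = rng.random()                     # position in [0, 1) along the segment
    return tuple(a + gap * (b - a) for a, b in zip(x, nn))

rng = random.Random(0)                     # fixed seed for reproducibility
minority = [(1.0, 1.0), (2.0, 1.0), (1.5, 2.0)]
synthetic = smote_sample(minority[0], minority[1:], rng)
print(synthetic)  # lies on the segment between (1, 1) and its nearest neighbour
```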
This research work proposes a new stacking-based generalization ensemble model to forecast the number of incidences of conjunctivitis disease. In addition to forecasting conjunctivitis incidences, the proposed model also improves performance through ensembling. The weekly rate of acute conjunctivitis per 1000 people in Hong Kong was collected from the first week of January 2010 to the last week of December 2019. Pre-processing techniques such as imputation of missing values and logarithmic transformation were applied to the datasets. A stacked generalization ensemble model based on Auto-ARIMA (Autoregressive Integrated Moving Average), NNAR (Neural Network Autoregression), ETS (Exponential Smoothing) and HW (Holt-Winters) is proposed and applied to the dataset. Predictive analysis is conducted on the collected conjunctivitis dataset and compared across different performance measures. The results show that the RMSE (Root Mean Square Error), MAE (Mean Absolute Error), MAPE (Mean Absolute Percentage Error) and ACF1 (Autocorrelation Function at lag 1) of the proposed ensemble decrease significantly. Considering the RMSE, for instance, error values are reduced by 39.23%, 9.13%, 20.42%, and 17.13% in comparison to the Auto-ARIMA, NNAR, ETS, and HW models respectively. This research concludes that the accuracy of disease forecasting can be significantly increased by applying the proposed stacked generalization ensemble model, as it minimizes the prediction error and hence provides better prediction trends than the Auto-ARIMA, NNAR, ETS, and HW models applied discretely.
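The core ensembling intuition can be shown in a few lines: base forecasts are combined by a meta-learner, and the combination's RMSE is compared against each base model. The series and forecasts below are invented, and the simplest possible meta-learner (an unweighted average) stands in for the paper's stacked meta-model.

```python
def rmse(pred, actual):
    """Root mean square error between a forecast and the observed series."""
    return (sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual)) ** 0.5

actual = [10.0, 12.0, 11.0, 13.0]
base_a = [9.0, 13.0, 10.0, 14.0]     # e.g. an ARIMA-style forecast (invented)
base_b = [11.0, 11.0, 12.0, 12.0]    # e.g. an exponential-smoothing forecast (invented)
ensemble = [(a + b) / 2 for a, b in zip(base_a, base_b)]

# the two base models err in opposite directions, so averaging cancels the error
print(rmse(base_a, actual), rmse(base_b, actual), rmse(ensemble, actual))
```

The example is deliberately constructed so the base errors are anti-correlated, which is exactly the situation in which stacking heterogeneous models (ARIMA, NNAR, ETS, HW) pays off.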
The Internet of Everything (IoE) offers a fantastic vision of the future, in which everything is connected to the internet, providing intelligent services and facilitating decision making. IoE is the collection of static and moving objects able to coordinate and communicate with each other. The moving objects may consist of ground segments and flying segments. The speed of a flying-segment object, e.g., an Unmanned Aerial Vehicle (UAV), may be high compared to ground-segment objects. Topology changes occur very frequently due to the high speed of objects in UAV-enabled IoE (Ue-IoE). The routing maintenance overhead may increase when scaling up the Ue-IoE (as the number of objects increases): a single change in topology can force all the objects of the Ue-IoE to update their routing tables. Similarly, frequent updating of routing table entries dissipates more energy, and the lifetime of the Ue-IoE may decrease, since objects consume more energy on routing computations. To prevent frequent routing-table updates at every object, the computation of routes from source to destination may be limited to an optimal number of objects in the Ue-IoE. In this article, we propose a routing scheme in which the responsibility of route computation (from neighbor objects to destination) is assigned to selected IoE objects in the Ue-IoE. The route computation objects (RCO) are selected on the basis of parameters such as remaining energy and mobility. The RCO send the routing information of destination objects to their neighbors when the latter want to communicate with other objects. The proposed protocol is simulated, and the results show that it outperforms state-of-the-art protocols in terms of average energy consumption, message overhead, throughput, and delay.
Cricket databases contain rich and useful information for examining and forecasting patterns and trends. This paper predicts Star Cricketers (SCs) in the batting and bowling domains by employing supervised machine learning models. With this aim, each player's performance evolution is retrieved using effective features that incorporate the standard performance measures of each player and of their peers. Prediction is performed by applying Bayesian-rule, function- and decision-tree-based models. Experimental evaluations are performed to validate the applicability of the proposed approach. In particular, the impact of the individual features on the prediction of SCs is analyzed, and category- and model-wise feature evaluations are also conducted. A cross-validation mechanism is applied to validate the performance of the proposed approach, which further confirms that the incorporated features are statistically significant. Finally, leading SCs are extracted based on their performance evolution scores, and their standings are cross-checked against those provided by the International Cricket Council.
Wireless sensor networks (WSNs) are a renowned ad hoc network technology with a vast variety of applications, such as computer networks, biomedical engineering, agriculture and industry, and they have been used in Internet of Things (IoT) applications. A data collection method utilizing hybrid compressive sensing (CS) is developed to reduce the quantity of data transmitted in a clustered sensor network and to balance the network load. Candidate cluster head nodes are first chosen from each temporary cluster as the nodes closest to the cluster centroid, and the cluster heads are then selected in order based on the distance between each determined cluster head node and the undetermined candidate cluster head nodes. Each ordinary node then joins the cluster nearest to it. Greedy CS is used to compress data transmission for nodes whose data transmission volume exceeds a threshold, within a data transmission tree rooted at the sink node and linking all cluster head nodes. The simulation results demonstrate that when the compression ratio is set to ten, the data transfer volume is reduced by a factor of ten. Compared to clustering and SPT without CS, the volume is reduced by 75% and 65%, respectively; compared to SPT with hybrid CS and clustering with hybrid CS, it is reduced by 35% and 20%, respectively. In terms of the standard deviation of node data transfer volume, reductions of 62% and 80% are achieved relative to clustering and SPT without CS, respectively, and of 41% and 19% relative to SPT with hybrid CS and clustering with hybrid CS, respectively.
The Tor dark web network has been reported to provide a breeding ground for criminals and fraudsters, who exploit the vulnerabilities in the network to carry out illicit and unethical activities. The network has unfortunately become a means of perpetrating crimes such as trafficking in illegal drugs and firearms, violence, and terrorist activities, among others. Governments and law enforcement agencies are working relentlessly to control the misuse of the Tor network. This study, in a similar league, attempts to suggest a link-based ranking technique to rank and identify the influential hidden services on the Tor dark web. The proposed method considers the extent of connectivity to surface web services and the values of the centrality metrics of a hidden service in the web graph for ranking. A modified PageRank algorithm is used to obtain the overall rankings of the hidden services in the dataset. Several graph metrics were used to evaluate the effectiveness of the proposed technique against other commonly known ranking procedures in the literature. The proposed ranking technique is shown to produce good results in identifying influential domains in the Tor network.
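The baseline the technique modifies can be sketched as standard power-iteration PageRank. The tiny link graph of "hidden services" below is invented; the paper's modification additionally folds in surface-web connectivity and centrality metrics, which this sketch does not attempt.

```python
def pagerank(graph, damping=0.85, iters=100):
    """Standard PageRank by power iteration over an adjacency-list graph."""
    nodes = list(graph)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - damping) / n for u in nodes}
        for u, outs in graph.items():
            if outs:
                share = damping * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:                        # dangling node: spread its rank evenly
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))   # the most linked-to service ranks highest
```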
COVID-19 disease spreads exponentially due to the rapid transmission of the virus between humans. Different countries have tried different solutions to control the spread of the disease, including lockdowns of countries or cities, quarantine, isolation, sanitization, and masks. Patients with symptoms of COVID-19 are tested using medical testing kits, and these tests must be conducted by healthcare professionals. However, the testing process is expensive and time-consuming. There is no surveillance system that can be used as a framework to identify regions of infected individuals and determine the rate of spread so that precautions can be taken. This paper introduces a novel technique based on deep learning (DL) that can be used as a surveillance system to identify infected individuals by analyzing tweets related to COVID-19. The system is used only for surveillance purposes, to identify regions where the spread of COVID-19 is high; clinical tests should then be used to test and identify infected individuals. The system proposed here uses recurrent neural networks (RNNs) and word-embedding techniques to analyze tweets and determine whether a tweet provides information about COVID-19 or refers to individuals who have been infected with the virus. The results demonstrate that RNNs can conduct this analysis more accurately than other machine learning (ML) algorithms.
The ubiquitous nature of the internet has made it easier for criminals to carry out illegal activities online. The sale of illegal firearms and weaponry on dark web cryptomarkets is one such example. To aid law enforcement agencies in curbing the illicit trade of firearms on cryptomarkets, this paper proposes an automated technique employing ensemble machine learning models to detect firearms listings on cryptomarkets. In this work, we use part-of-speech (PoS) tagged features in conjunction with n-gram models to construct the feature set for the ensemble model. We studied the effect of the proposed features on the performance of the classification model and the relative change in the dimensionality of the feature set. The experiments and evaluations are performed on data belonging to three popular cryptomarkets on the Tor dark web, drawn from a publicly available dataset. The predictions of the classification model can be utilized to identify the key vendors in the ecosystem of the illegal firearms trade. This information can then be used by law enforcement agencies to bust firearm trafficking on the dark web.
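The feature construction can be sketched as combining word n-grams with PoS-tag n-grams into one feature set. In this illustration the PoS tags are hand-written (in practice a tagger such as NLTK's would supply them), and the listing text is invented.

```python
def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def features(tokens, tags, n=2):
    """Unigrams and bigrams over both the words and their PoS tags,
    namespaced so word and tag features never collide."""
    feats = set()
    for k in range(1, n + 1):
        feats.update(("w",) + g for g in ngrams(tokens, k))
        feats.update(("t",) + g for g in ngrams(tags, k))
    return feats

tokens = ["glock", "19", "for", "sale"]   # invented listing text
tags = ["NNP", "CD", "IN", "NN"]          # hand-assigned PoS tags
fs = features(tokens, tags)
print(("w", "glock", "19") in fs, ("t", "IN", "NN") in fs)
```

Each listing's feature set would then be vectorized (e.g. one-hot or TF-IDF) and fed to the ensemble classifier; the tag n-grams are what let the model generalize past vendor-specific vocabulary.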
Smartphone devices, particularly Android devices, are in use by billions of people everywhere in the world. This increasing adoption attracts mobile botnet attacks, in which a network of interconnected nodes is operated through the command and control (C&C) method to expand malicious activities. At present, mobile botnet attacks launch distributed denial of service (DDoS) attacks that lead to theft of sensitive data, remote access, and spam generation, among other harms. Consequently, various approaches have been defined in the literature to detect mobile botnet attacks using static or dynamic analysis. In this paper, a novel hybrid model, a combination of static and dynamic methods that relies on machine learning to detect Android botnet applications, is proposed. The results are evaluated using machine learning classifiers, and the Random Forest (RF) classifier outperforms the other ML techniques, i.e., Naïve Bayes (NB), Support Vector Machine (SVM), and Simple Logistic (SL). Our proposed framework achieved 97.48% accuracy in the detection of botnet applications. Finally, some future research directions regarding botnet attack detection are highlighted for the community.
Since the beginning of web applications, security has been a critical study area, and much research has been done on how to define and identify security goals and issues. However, high-security web applications have been found to be less durable in recent years, which reduces their business continuity. High-security features of a web application are worthless unless they provide effective services to the user and meet the standards of commercial viability. Hence, there is a need to bridge the gap between the durability and the security of web applications: security mechanisms must be used to enhance durability as well as security. Although durability and security are not directly related, some of their factors influence each other indirectly, and characteristics play an important role in reducing the gap between them. In this respect, the present study identifies key characteristics of security and durability that affect each other directly and indirectly, including confidentiality, integrity, availability, human trust and trustworthiness. Weighting these attributes by importance is essential for assessing their influence on overall security during the web application development procedure. To estimate the efficacy of the present study, the authors employed the Hesitant Fuzzy Analytic Hierarchy Process (H-Fuzzy AHP). The outcomes of our investigation and conclusions will be a useful reference for web application developers in achieving more secure and durable web applications.
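A crisp AHP sketch shows how attribute weights fall out of a pairwise comparison matrix via the geometric-mean method; the hesitant-fuzzy variant used in the paper replaces each crisp judgement with a set of fuzzy values but keeps the same overall shape. The judgement matrix below is invented for the example.

```python
from math import prod

def ahp_weights(pairwise):
    """Normalized geometric-mean weights from a reciprocal comparison matrix."""
    n = len(pairwise)
    gmeans = [prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# hypothetical judgements over (confidentiality, integrity, availability):
# confidentiality is judged 2x integrity and 4x availability, etc.
matrix = [
    [1.0, 2.0, 4.0],
    [0.5, 1.0, 2.0],
    [0.25, 0.5, 1.0],
]
w = ahp_weights(matrix)
print([round(x, 4) for x in w])   # weights sum to 1, in 4:2:1 proportion
```

A real AHP run would also check the consistency ratio of the judgement matrix before trusting these weights; that step is omitted here.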
Funding: The authors would like to thank Research Supporting Project Number (RSP2024R444), King Saud University, Riyadh, Saudi Arabia.
Funding: This research was supported through Taif University Researchers Supporting Project number (TURSP-2020/231), Taif University, Taif, Saudi Arabia.
Funding: We deeply acknowledge Taif University for supporting this research through Taif University Researchers Supporting Project number (TURSP-2020/231), Taif University, Taif, Saudi Arabia.
Abstract: Component-based software engineering is concerned with developing software that can satisfy customer requirements through reuse or independent development. Coupling and cohesion measurements are primarily used to analyze software design quality, increase reliability, and reduce system complexity. The complexity measurement of coupling and cohesion is used to analyze the relationships between component modules. This paper proposes a component selection framework based on the Hexa-oval optimization algorithm for selecting suitable components from a repository. It measures the interface density of coupling and cohesion modules in a modular software system. The cohesion measurement uses two parameters, low cohesion and high cohesion, to analyze the resulting complexity, while coupling is measured between a component's internal and external parameters. Finally, the measured coupling and cohesion values are averaged over the component parameters. This paper measures the complexity of direct and indirect interactions among components, and the proposed algorithm selects the optimal component from the repository. The best results are observed for high cohesion and low coupling in component-based software engineering.
Abstract: Many search-based algorithms have been successfully applied in several software engineering activities. Genetic algorithms (GAs) are the most used by scholars in scientific domains to solve software testing problems; they imitate the theory of natural selection and evolution. The harmony search algorithm (HSA) is one of the most recent search algorithms; it imitates the behavior of a musician searching for the best harmony. Scholars have estimated the similarities and differences between genetic algorithms and the harmony search algorithm in diverse research domains. The test data generation process represents a critical task in software validation. Unfortunately, there is no work comparing the performance of genetic algorithms and the harmony search algorithm in the test data generation process. This paper studies the similarities and differences between genetic algorithms and the harmony search algorithm based on their ability and speed in finding the required test data. The current research performs an empirical comparison of the HSA and the GAs, and the significance of the results is then estimated using the t-test. The study investigates the efficiency of the harmony search algorithm and the genetic algorithms according to (1) the time performance, (2) the significance of the generated test data, and (3) the adequacy of the generated test data to satisfy a given testing criterion. The results showed that the harmony search algorithm is significantly faster than the genetic algorithms, because the t-test showed that the p-value of the time values is 0.026 < α (where α is the significance level, 0.05 at a 95% confidence level). In contrast, there is no significant difference between the two algorithms in generating adequate test data, because the t-test showed that the p-value of the fitness values is 0.25 > α.
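The harmony search loop compared above can be sketched as follows. This is a hypothetical toy setup, not the paper's experiment: the "test data" is a single integer input and the fitness is the distance to a value that covers a target branch. The parameter names (harmony memory size, HMCR, PAR) follow standard harmony-search terminology.

```python
import random

def fitness(x):
    """Branch-distance-style fitness: 0 means the target branch is covered."""
    return abs(x - 37)

def harmony_search(lo=0, hi=100, hms=5, hmcr=0.9, par=0.3, iters=20000, seed=1):
    rng = random.Random(seed)
    memory = [rng.randint(lo, hi) for _ in range(hms)]   # harmony memory
    for _ in range(iters):
        if rng.random() < hmcr:                 # consider the memory
            x = rng.choice(memory)
            if rng.random() < par:              # pitch adjustment (+/- 1 step)
                x = min(hi, max(lo, x + rng.choice([-1, 1])))
        else:                                   # random improvisation
            x = rng.randint(lo, hi)
        worst = max(memory, key=fitness)
        if fitness(x) < fitness(worst):         # replace the worst harmony
            memory[memory.index(worst)] = x
        if min(fitness(m) for m in memory) == 0:
            break                               # target test data found
    return min(memory, key=fitness)
```

A GA would differ mainly in maintaining a population with crossover and mutation instead of improvising one harmony per iteration; the fitness function and stopping criterion stay the same, which is what makes the time comparison in the abstract meaningful.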
Funding: This work was supported by the Taif University Researchers Supporting Project Number (TURSP-2020/254).
Abstract: COVID-19, the virus of fear and anxiety, is one of the most recent and emergent of various respiratory disorders. It is similar to MERS-CoV and SARS-CoV, the viruses that affected large populations in different countries in the years 2012 and 2002, respectively. Various standard models have been used for COVID-19 epidemic prediction, but they suffered from low accuracy due to limited data availability and a high level of uncertainty. The proposed approach used a machine learning-based time-series Facebook NeuralProphet model to predict the number of deaths as well as confirmed cases, and compared it with the Poisson distribution and random forest models. The analysis was performed on the dataset covering the period from January 1st, 2020 to July 16th, 2021, and the model was developed to obtain forecast values until September 2021. This study aimed to predict the second wave of the COVID-19 pandemic in India using the latest time-series model, in order to observe and predict the coronavirus pandemic situation across the country. In India, cases have been increasing rapidly day by day since mid-February 2021. The proposed model shows a good ability to forecast death rates on the COVID-19 dataset, especially in the second wave, and works effectively for future prediction and validation.
Funding: This work was supported by Taif University Researchers Supporting Project Number (TURSP-2020/231), Taif University, Taif, Saudi Arabia.
Abstract: In vehicular ad hoc networks (VANETs), the topology information (TI) is updated frequently due to vehicle mobility. These frequent changes in topology increase the topology maintenance overhead. To reduce the control message overhead, cluster-based routing schemes have been proposed. In cluster-based routing schemes, the nodes are divided into different virtual groups, and each group (logical node) is considered a cluster. Topology changes are accommodated within each cluster, so broadcasting TI to the whole VANET is not required. The cluster head (CH) is responsible for managing the communication of a node with nodes outside the cluster. However, transmitting real-time data via a CH may cause delays in VANETs. Such real-time data require quick service and should be routed through the shortest path when quality of service (QoS) is required. This paper proposes a hybrid scheme that transmits time-critical data through the QoS shortest path and normal data through CHs. In this way, real-time data are delivered efficiently to the destination on time, while routine data are transmitted through CHs to reduce the topology maintenance overhead. The work is validated through a series of simulations, and the results show that the proposed scheme outperforms existing algorithms in terms of topology maintenance overhead, QoS, and real-time and routine packet transmission.
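The "QoS shortest path" for time-critical packets is, at its core, a least-cost path computation such as Dijkstra's algorithm. The sketch below is illustrative only; the graph, link delays, and node names are hypothetical, and the paper's actual scheme layers this on top of its clustering logic.

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over an adjacency dict {node: {neighbor: delay}}.
    Returns (total_delay, path) or (inf, []) if dst is unreachable."""
    pq = [(0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return float("inf"), []
```

In the hybrid scheme's terms, a real-time packet would follow the path returned here, while routine traffic would simply be forwarded to the local cluster head.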
Funding: Funding for this study was received from the Taif University Researchers Supporting Projects at Taif University, Kingdom of Saudi Arabia, under Grant No. TURSP-2020/254.
Abstract: In a digital world moving at breakneck speed, consultancy services have emerged as one of the prominent resources for seeking effective, sustainable, and economically viable solutions to a given crisis. Present-day consultancy services are aided by the use of multiple tools and techniques. However, ensuring the security of these tools and techniques is an important concern for consultants, because even a slight malfunction of any tool could alter the results drastically. Consultants usually tackle these functions after establishing the client's needs and developing the appropriate strategy. Nevertheless, most consultants tend to focus more on the intended outcomes and often ignore security-specific issues. Our research study is an initiative to recommend the use of a hybrid computational technique based on the fuzzy Analytical Hierarchy Process (AHP) and the fuzzy Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) for prioritizing the tools and techniques used in consultancy services on the basis of their security features and efficacy. The empirical analysis conducted in this context shows that after implementing the assessment process, the ranking of the tools and techniques obtained is A7 > A1 > A4 > A2 > A3 > A5 > A6, with the General Electric McKinsey (GE-McKinsey) Nine-box Matrix (A7) obtaining the highest rank. Thus, the outcomes show that this order of selection of the tools and techniques will give the most effective and secure services. Awareness about using the best tools and techniques in consultancy services is as important as selecting the most secure tool for solving a given problem. In this league, the results obtained in this study would be a conclusive and reliable reference for consultants.
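The ranking step can be illustrated with classical (crisp) TOPSIS; the paper uses a fuzzy AHP/TOPSIS hybrid, of which this is only a simplified sketch. The decision matrix, weights, and benefit/cost flags below are invented for illustration.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) against criteria (columns).
    benefit[j] is True if a larger value is better for criterion j.
    Returns a closeness score per alternative (higher = better)."""
    m, n = len(matrix), len(matrix[0])
    # vector-normalize each column, then apply the criterion weight
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # ideal (best) and anti-ideal (worst) value per criterion
    best = [max(v[i][j] for i in range(m)) if benefit[j]
            else min(v[i][j] for i in range(m)) for j in range(n)]
    worst = [min(v[i][j] for i in range(m)) if benefit[j]
             else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_best = math.sqrt(sum((v[i][j] - best[j]) ** 2 for j in range(n)))
        d_worst = math.sqrt(sum((v[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(d_worst / (d_best + d_worst))
    return scores
```

Sorting the alternatives by these closeness scores yields a ranking of the form reported in the abstract; the fuzzy variant replaces the crisp matrix entries with fuzzy numbers before the same distance computation.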
Abstract: Long-wavelength GaSb-based quantum well lasers have been optimized for high coupling efficiency into an optical system. Two approaches were used to reduce the vertical far-field. In the first approach, we showed that the use of a V-shaped weaker waveguide in the n-cladding layer dramatically reduces vertical beam divergence without any performance degradation compared to a conventional broad-waveguide laser structure. Starting from a broad-waveguide laser structure design which gives low threshold current and a large vertical far-field (VFF), the structure was modified to decrease the VFF while maintaining a low threshold-current density. In a first step, the combination of a narrow optical waveguide and a reduced refractive index step between the waveguide and the cladding layers reduces the VFF from 67° to 42°. The threshold current density was kept low, at a value of ~190 A/cm², for 1000 × 100 μm² devices by careful adjustment of the doping profile in the p-type cladding layer. The insertion of a V-shaped weaker waveguide in the n-cladding layer is shown to allow further reduction of the VFF to a value as low as 35° for better light-coupling efficiency into an optical system, without any degradation of the device performance. In the second approach, we showed that the use of a depressed-cladding structure design also allows reduction of the VFF while keeping the threshold current density low (210 A/cm²), a slightly higher value compared to the first design.
Funding: This research was supported by Taif University Researchers Supporting Project number (TURSP-2020/254), Taif University, Taif, Saudi Arabia.
Abstract: Datasets with an imbalanced class distribution are difficult to handle with standard classification algorithms. In supervised learning, dealing with the problem of class imbalance is still considered a challenging research problem. Various machine learning techniques are designed to operate on balanced datasets; therefore, state-of-the-art undersampling, oversampling, and hybrid strategies have been proposed to deal with imbalanced datasets, but highly skewed datasets still pose the problems of poor generalization and noise generation during resampling. To overcome these problems, this paper proposes a majority clustering model for classification of imbalanced datasets known as MCBC-SMOTE (Majority Clustering for Balanced Classification-SMOTE). The model provides a method to convert the problem of binary classification into a multi-class problem. In the proposed algorithm, the number of clusters for the majority class is calculated using the elbow method, and the minority class is oversampled as an average of the clustered majority classes to generate a symmetrical class distribution. The proposed technique is cost-effective, reduces the problem of noise generation, and successfully removes the imbalances present both between and within classes. Evaluations on diverse real datasets showed better classification results compared to existing state-of-the-art methodologies, based on several performance metrics.
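The SMOTE component of the model can be sketched as nearest-neighbor interpolation. This is a hypothetical 1-D illustration of the standard SMOTE idea, not the paper's full MCBC-SMOTE pipeline (which first clusters the majority class).

```python
import random

def smote(minority, n_new, k=2, seed=0):
    """Generate n_new synthetic samples from 1-D minority points:
    each sample interpolates between a point and one of its k
    nearest minority-class neighbors."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbors of x (excluding x itself)
        nbrs = sorted((p for p in minority if p != x),
                      key=lambda p: abs(p - x))[:k]
        nbr = rng.choice(nbrs)
        gap = rng.random()                 # interpolation factor in [0, 1)
        out.append(x + gap * (nbr - x))
    return out
```

Because every synthetic point lies on a segment between two real minority points, the oversampled class stays inside the original minority region, which is the property the abstract's "reduced noise generation" claim builds on.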
Funding: The authors would like to express their gratitude to Taif University, Taif, Saudi Arabia for providing administrative and technical support. This work was supported by the Taif University Researchers Supporting Project number (TURSP-2020/254).
Abstract: This research work proposes a new stacked generalization ensemble model to forecast the number of incidences of conjunctivitis disease. In addition to forecasting the occurrence of conjunctivitis incidences, the proposed model also improves performance through ensembling. The weekly rate of acute conjunctivitis per 1000 people in Hong Kong was collected from the first week of January 2010 to the last week of December 2019. Preprocessing techniques such as imputation of missing values and logarithmic transformation were applied to the datasets. A stacked generalization ensemble model based on Auto-ARIMA (Autoregressive Integrated Moving Average), NNAR (Neural Network Autoregression), ETS (Exponential Smoothing), and HW (Holt-Winters) is proposed and applied to the dataset. Predictive analysis is conducted on the collected conjunctivitis dataset and compared across different performance measures. The results show that the RMSE (Root Mean Square Error), MAE (Mean Absolute Error), MAPE (Mean Absolute Percentage Error), and ACF1 (Auto-Correlation Function) of the proposed ensemble decrease significantly. Considering the RMSE, for instance, error values are reduced by 39.23%, 9.13%, 20.42%, and 17.13% in comparison to the Auto-ARIMA, NNAR, ETS, and HW models, respectively. This research concludes that the accuracy of disease forecasting can be significantly increased by applying the proposed stacked generalization ensemble model, as it minimizes the prediction error and hence provides better prediction trends than the Auto-ARIMA, NNAR, ETS, and HW models applied discretely.
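The comparison metrics named in the abstract are standard and easy to state in plain Python; these definitions are general, not specific to the paper's dataset.

```python
import math

def rmse(actual, pred):
    """Root Mean Square Error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

def mae(actual, pred):
    """Mean Absolute Error."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def mape(actual, pred):
    """Mean Absolute Percentage Error; assumes no zero actual values."""
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)
```

In a stacking setup, each base model (Auto-ARIMA, NNAR, ETS, HW) produces a forecast series, a meta-learner combines them, and these metrics score the combined forecast against the held-out weekly incidence rates.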
Funding: This work was supported by Taif University Researchers Supporting Project number (TURSP-2020/231), Taif University, Taif, Saudi Arabia.
Abstract: The Internet of Everything (IoE) presents a fantastic vision of the future, where everything is connected to the internet, providing intelligent services and facilitating decision making. IoE is the collection of static and moving objects able to coordinate and communicate with each other. The moving objects may consist of ground segments and flying segments. The speed of flying-segment objects, e.g., Unmanned Aerial Vehicles (UAVs), may be high compared to ground-segment objects. Topology changes occur very frequently due to the high-speed nature of objects in UAV-enabled IoE (Ue-IoE). The routing maintenance overhead may increase when scaling up the Ue-IoE (as the number of objects increases): a single change in topology can force all objects in the Ue-IoE to update their routing tables. Similarly, frequent updating of routing table entries results in more energy dissipation, and the lifetime of the Ue-IoE may decrease, since objects consume more energy on routing computations. To prevent frequent updating of the routing tables associated with each object, the computation of routes from source to destination may be limited to an optimal number of objects in the Ue-IoE. In this article, we propose a routing scheme in which the responsibility of route computation (from neighbor objects to destination) is assigned to selected IoE objects in the Ue-IoE. The route computation objects (RCO) are selected on the basis of certain parameters, such as remaining energy and mobility. The RCO send the routing information of destination objects to their neighbors once they want to communicate with other objects. The proposed protocol is simulated, and the results show that it outperforms state-of-the-art protocols in terms of average energy consumption, message overhead, throughput, delay, etc.
Funding: This work is financially supported by Universiti Tunku Abdul Rahman, Kampar, Perak, Malaysia. The authors also acknowledge Taif University for financial support of this research through the Taif University Researchers Supporting Project (TURSP-2020/231), Taif University, Saudi Arabia.
Abstract: Cricket databases contain rich and useful information for examining and forecasting patterns and trends. This paper predicts Star Cricketers (SCs) from the batting and bowling domains by employing supervised machine learning models. With this aim, each player's performance evolution is retrieved using effective features that incorporate the standard performance measures of each player and their peers. Prediction is performed by applying Bayesian-rule, function-based, and decision-tree-based models. Experimental evaluations are performed to validate the applicability of the proposed approach. In particular, the impact of individual features on the prediction of SCs is analyzed. Moreover, category- and model-wise feature evaluations are also conducted. A cross-validation mechanism is applied to validate the performance of our proposed approach, which further confirms that the incorporated features are statistically significant. Finally, leading SCs are extracted based on their performance evolution scores, and their standings are cross-checked with those provided by the International Cricket Council.
Funding: This work was supported by the Researchers Supporting Project (No. RSP-2021/395), King Saud University, Riyadh, Saudi Arabia.
Abstract: Wireless sensor networks (WSNs) are one of the renowned ad hoc network technologies with a vast variety of applications, such as in computer networks, biomedical engineering, agriculture, and industry, and they have been used in internet-of-things (IoT) applications. A method for data collection utilizing hybrid compressive sensing (CS) is developed in order to reduce the quantity of data transmission in the clustered sensor network and balance the network load. Candidate cluster head nodes are first chosen from each temporary cluster as the nodes closest to the cluster centroid, and then the cluster heads are selected in order based on the distance between the determined cluster head node and the undetermined candidate cluster head nodes. Each ordinary node then joins the cluster nearest to it. Greedy CS is used to compress data transmission for nodes whose data transmission volume exceeds a threshold, in a data transmission tree rooted at the sink node and linking all cluster head nodes. The simulation results demonstrate that when the compression ratio is set to ten, the data transfer volume is reduced by a factor of ten: compared to clustering and SPT without CS, it is reduced by 75% and 65%, respectively, and compared to SPT with hybrid CS and clustering with hybrid CS, it is reduced by 35% and 20%, respectively. In terms of the standard deviation of node data transfer volume, compared to clustering and SPT without CS, reductions of 62% and 80% are achieved, respectively, and compared to SPT with hybrid CS and clustering with hybrid CS, reductions of 41% and 19%, respectively.
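The candidate cluster head rule described above (the node closest to its cluster centroid) can be sketched directly. The 2-D sensor coordinates below are hypothetical.

```python
def centroid(nodes):
    """Centroid of a list of (x, y) node positions."""
    xs, ys = zip(*nodes)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def cluster_head(nodes):
    """Pick the node nearest the centroid of its cluster
    (squared Euclidean distance; no sqrt needed for argmin)."""
    cx, cy = centroid(nodes)
    return min(nodes, key=lambda n: (n[0] - cx) ** 2 + (n[1] - cy) ** 2)
```

Choosing a central node keeps intra-cluster transmission distances short, which is what lets the hybrid CS scheme balance the per-node transmission load.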
Funding: This work was supported by Taif University Researchers Supporting Project Number (TURSP-2020/231), Taif University, Taif, Saudi Arabia.
Abstract: The Tor dark web network has been reported to provide a breeding ground for criminals and fraudsters who exploit vulnerabilities in the network to carry out illicit and unethical activities. The network has unfortunately become a means to perpetuate crimes such as illegal drug and firearm trafficking, violence, and terrorist activities, among others. Governments and law enforcement agencies are working relentlessly to control the misuse of the Tor network. This study is in a similar league, attempting to suggest a link-based ranking technique to rank and identify the influential hidden services on the Tor dark web. The proposed method considers the extent of connectivity to surface web services and the values of the centrality metrics of a hidden service in the web graph for ranking. A modified PageRank algorithm is used to obtain the overall rankings of the hidden services in the dataset. Several graph metrics were used to evaluate the effectiveness of the proposed technique against other commonly known ranking procedures in the literature. The proposed ranking technique is shown to produce good results in identifying the influential domains in the Tor network.
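The modified PageRank used above builds on the plain PageRank iteration, which can be sketched as follows. The tiny link graph is illustrative; the paper's modification (surface-web connectivity and centrality weighting) is not reproduced here.

```python
def pagerank(links, d=0.85, iters=50):
    """links: {page: [outbound pages]}. Returns a rank per page."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if not outs:                      # dangling page: spread evenly
                for q in pages:
                    new[q] += d * rank[p] / n
            else:                             # split rank over outbound links
                for q in outs:
                    new[q] += d * rank[p] / len(outs)
        rank = new
    return rank
```

Hidden services that attract many inbound links from other ranked services accumulate rank, which is the sense in which the technique surfaces "influential" domains.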
Funding: The authors acknowledge support from Taif University through Researchers Supporting Project number (TURSP-2020/231), Taif University, Taif, Saudi Arabia.
Abstract: COVID-19 disease is spreading exponentially due to the rapid transmission of the virus between humans. Different countries have tried different solutions to control the spread of the disease, including lockdowns of countries or cities, quarantines, isolation, sanitization, and masks. Patients with symptoms of COVID-19 are tested using medical testing kits; these tests must be conducted by healthcare professionals. However, the testing process is expensive and time-consuming. There is no surveillance system that can be used as a framework to identify regions of infected individuals and determine the rate of spread so that precautions can be taken. This paper introduces a novel technique based on deep learning (DL) that can be used as a surveillance system to identify infected individuals by analyzing tweets related to COVID-19. The system is used only for surveillance purposes, to identify regions where the spread of COVID-19 is high; clinical tests should then be used to test and identify infected individuals. The system proposed here uses recurrent neural networks (RNN) and word-embedding techniques to analyze tweets and determine whether a tweet provides information about COVID-19 or refers to individuals who have been infected with the virus. The results demonstrate that the RNN can conduct this analysis more accurately than other machine learning (ML) algorithms.
Funding: Funding for this study was received from the Taif University Research Supporting Projects at Taif University, Kingdom of Saudi Arabia, under Grant No. TURSP-2020/254.
Abstract: The ubiquitous nature of the internet has made it easier for criminals to carry out illegal activities online. The sale of illegal firearms and weaponry on dark web cryptomarkets is one such example. To aid law enforcement agencies in curbing the illicit trade of firearms on cryptomarkets, this paper proposes an automated technique employing ensemble machine learning models to detect firearm listings on cryptomarkets. In this work, we use part-of-speech (PoS) tagged features in conjunction with n-gram models to construct the feature set for the ensemble model. We studied the effectiveness of the proposed features on the performance of the classification model and the relative change in the dimensionality of the feature set. The experiments and evaluations were performed on data belonging to three popular cryptomarkets on the Tor dark web, from a publicly available dataset. The predictions of the classification model can be utilized to identify the key vendors in the ecosystem of the illegal firearm trade. This information can then be used by law enforcement agencies to bust firearm trafficking on the dark web.
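The n-gram feature construction mentioned above can be sketched in a few lines. This is a hedged illustration using plain word n-grams; in the actual pipeline each token would also carry its PoS tag, and the example listing text is invented.

```python
def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def feature_counts(text, n=2):
    """Bag of lowercased word n-grams with their frequencies,
    usable as a sparse feature vector for a classifier."""
    tokens = text.lower().split()
    counts = {}
    for g in ngrams(tokens, n):
        counts[g] = counts.get(g, 0) + 1
    return counts
```

Stacking the counts for n = 1, 2, 3 over a listing's title and description gives the kind of feature set whose dimensionality the abstract discusses; the PoS tags constrain it by keeping only grammatically informative n-grams.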
Abstract: Smartphone devices, particularly Android devices, are used by billions of people all over the world. This increasing adoption attracts mobile botnet attacks, in which a network of interconnected nodes operated through a command and control (C&C) channel is used to expand malicious activities. At present, mobile botnet attacks launch distributed denial of service (DDoS) attacks and lead to the theft of sensitive data, remote access, spam generation, and more. Consequently, various approaches have been proposed in the literature to detect mobile botnet attacks using static or dynamic analysis. In this paper, a novel hybrid model, a combination of static and dynamic methods that relies on machine learning to detect Android botnet applications, is proposed. The results are evaluated using machine learning classifiers, and the Random Forest (RF) classifier outperforms the other ML techniques, i.e., Naïve Bayes (NB), Support Vector Machine (SVM), and Simple Logistic (SL). Our proposed framework achieved 97.48% accuracy in the detection of botnet applications. Finally, some future research directions regarding botnet attack detection are highlighted for the community.
Funding: This work was funded by the Taif University Researchers Supporting Projects at Taif University, Kingdom of Saudi Arabia, under Grant Number TURSP-2020/231.
Abstract: Since the beginning of web applications, security has been a critical study area, and a great deal of research has been done to define and identify security goals and issues. However, highly secure web applications have been found to be less durable in recent years, reducing their business continuity. High security features of a web application are worthless unless they provide effective services to the user and meet the standards of commercial viability. Hence, there is a need to bridge the gap between the durability and security of web applications; indeed, security mechanisms must be used to enhance durability as well as security. Although durability and security are not directly related, some of their factors influence each other indirectly, and these characteristics play an important role in narrowing the gap between them. In this respect, the present study identifies key characteristics of security and durability that affect each other directly and indirectly, including confidentiality, integrity, availability, human trust, and trustworthiness. The weight of each attribute is essential to its influence on overall security during the web application development procedure. To estimate the efficacy of the present study, the authors employed the Hesitant Fuzzy Analytic Hierarchy Process (H-Fuzzy AHP). The outcomes of our investigation will be a useful reference for web application developers in achieving more secure and durable web applications.