This work carried out a measurement study of the Ethereum Peer-to-Peer (P2P) network to gain a better understanding of the underlying nodes. Ethereum was chosen because it pioneered distributed applications, smart contracts, and Web3; moreover, its application-layer language, Solidity, is widely used for smart contracts across different public and private blockchains. To this end, we wrote a new Ethereum client based on Geth to collect Ethereum node information, and we built several web scrapers to collect nodes' historical data from the Internet Archive and the Wayback Machine project. The collected data were compared with two other services that harvest the number of Ethereum nodes; our method collected more than 30% more nodes than the other services. The data were used to train a time-series neural network model to predict the number of online nodes in the future. Our findings show that fewer than 20% of nodes recur from day to day, indicating that most nodes in the network change frequently, which raises questions about the stability of the network. Furthermore, the historical data show that the top ten countries hosting Ethereum clients have not changed since 2016. The predominant operating system of the underlying nodes has shifted from Windows to Linux over time, improving node security. The results also show that the number of Ethereum nodes in the Middle East and North Africa (MENA) region is negligible compared with other regions, which opens the door to new mechanisms that encourage users from these regions to contribute to this technology. Finally, the trained model achieved an accuracy of 92% in predicting the future number of nodes in the Ethereum network.
Funding: The Arab Open University funded this work through AOU Research Fund No. (AOURG-2023-006).
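The abstract above does not describe the prediction model's architecture, so the following Python sketch only illustrates the time-series idea: lagged daily node counts feed a small feed-forward network (scikit-learn's MLPRegressor here) that predicts the next day's count. The synthetic `daily_counts` series, the 14-day window, and the layer sizes are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch: predict tomorrow's node count from the last `window` days.
# The data here are synthetic; a crawler's daily output would replace `daily_counts`.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
daily_counts = 8000 + 500 * np.sin(np.arange(400) / 20) + rng.normal(0, 50, 400)

window = 14  # assumed look-back of two weeks
X = np.array([daily_counts[i:i + window] for i in range(len(daily_counts) - window)])
y = daily_counts[window:]

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mape = np.mean(np.abs(pred - y[split:]) / y[split:]) * 100
print(f"hold-out MAPE: {mape:.2f}%  (accuracy ~ {100 - mape:.1f}%)")
```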
In standard iris recognition systems, a cooperative imaging framework is employed that includes a light source with a near-infrared wavelength to reveal iris texture, look-and-stare constraints, and a close distance requirement to the capture device. When these conditions are relaxed, the system's performance deteriorates significantly due to segmentation and feature extraction problems. Herein, a novel segmentation algorithm is proposed to correctly detect the pupil and limbus boundaries of iris images captured in unconstrained environments. First, the algorithm scans the whole iris image in the Hue Saturation Value (HSV) color space for local maxima to detect the sclera region. The image quality is then assessed by computing global features in red, green, and blue (RGB) space, as noisy images have heterogeneous characteristics. The iris images are accordingly classified into seven categories based on their global RGB intensities. After the classification process, the images are filtered, and adaptive thresholding is applied to enhance the global contrast and detect the outer iris ring. Finally, to characterize the pupil area, the algorithm scans the cropped outer-ring region for local minima to identify the darkest area in the iris ring. The experimental results show that our method outperforms existing segmentation techniques on the UBIRIS.v1 and v2 databases, achieving a segmentation accuracy of 99.32 on UBIRIS.v1 and an error rate of 1.59 on UBIRIS.v2.
Funding: The authors extend their appreciation to the Arab Open University, Saudi Arabia, for funding this work through AOU Research Fund No. AOURG-2023-009.
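As a rough illustration of two of the steps described above (an HSV scan for bright, low-saturation sclera candidates, and adaptive thresholding to expose dark structures such as the outer iris ring), the following OpenCV sketch may help; every threshold, kernel size, and file name in it is a placeholder rather than a value from the paper.

```python
# Illustrative sketch only: HSV sclera candidates, global RGB statistics, adaptive
# thresholding, and a darkest-point search for the pupil. Parameter values are placeholders.
import cv2
import numpy as np

img = cv2.imread("eye.jpg")                       # unconstrained visible-light eye image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Sclera candidates: low saturation, high value (bright local maxima).
_, s, v = cv2.split(hsv)
sclera_mask = ((s < 60) & (v > 180)).astype(np.uint8) * 255

# Simple global colour statistics, as used to route images into quality categories.
means = cv2.mean(img)[:3]                          # (B, G, R) means
print("global BGR means:", [round(m, 1) for m in means])

# Adaptive thresholding on the smoothed grey image highlights dark structures
# such as the outer iris ring.
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
grey = cv2.medianBlur(grey, 5)
ring = cv2.adaptiveThreshold(grey, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                             cv2.THRESH_BINARY_INV, 31, 5)

# The darkest point among the dark candidate regions approximates the pupil centre.
min_val, _, min_loc, _ = cv2.minMaxLoc(grey, mask=ring)
print("pupil candidate at", min_loc, "intensity", min_val)
```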
Discovering and identifying influential nodes in a complex network is an important problem, as it is a key factor in exerting control over the network: through such control, information can be spread or stopped within a short span of time, and an information network can likewise be extended or destroyed. Information spread and community formation have therefore become crucial issues in Social Network Analysis (SNA). In this work, a complex network drawn from the Twitter social network is formalized and the results are analyzed using different network metrics. The network is visualized in its original form and then filtered at different percentages to eliminate the less influential nodes and edges for clearer analysis. The network is analyzed with several centrality measures, including edge betweenness, betweenness centrality, closeness centrality, and eigenvector centrality; influential nodes are detected and their impact on the network is observed. The communities are analyzed in terms of network coverage, considering the Minimum Spanning Tree, the shortest-path distribution, and the network diameter. These are found to be effective ways to identify influential and central nodes in large social networks such as Facebook, Instagram, Twitter, and LinkedIn.
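The centrality and coverage measures named above are all available in NetworkX; the sketch below runs them on a small scale-free stand-in graph, since the paper's Twitter network is not reproduced here.

```python
# Centrality-based analysis on a small stand-in graph.
import networkx as nx

G = nx.barabasi_albert_graph(200, 2, seed=42)      # scale-free toy network

betweenness = nx.betweenness_centrality(G)
closeness = nx.closeness_centrality(G)
eigenvector = nx.eigenvector_centrality(G, max_iter=500)
edge_betweenness = nx.edge_betweenness_centrality(G)

top_influential = sorted(betweenness, key=betweenness.get, reverse=True)[:5]
print("top-5 nodes by betweenness:", top_influential)

# Structural summaries used in the community/coverage discussion.
mst = nx.minimum_spanning_tree(G)
print("MST edges:", mst.number_of_edges())
print("diameter:", nx.diameter(G))
print("average shortest path length:", round(nx.average_shortest_path_length(G), 3))
```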
Renewable energy is a safe and limitless energy source that can be utilized for heating, cooling, and other purposes. Wind energy is one of the most important renewable energy sources, but the power output of wind turbines fluctuates with variations in wind velocity. A wind cube is used to decrease power fluctuation and increase a wind turbine's power, and the optimum design of a wind cube is the main contribution of this work. The decisive design parameters used to optimize the wind cube are its inner and outer radius, the roughness factor, and the height of the wind turbine hub. The Gradient-Based Optimizer (GBO), a recent metaheuristic algorithm, is applied to this problem. The objective function has two parts: the first minimizes the probability of generated energy loss, and the second minimizes the cost of the wind turbine and wind cube. The GBO is used to optimize the variables of two wind turbine types and the design of the wind cube, with meteorological data from the Red Sea governorate of Egypt as a case study. Based on the results, the optimum design of a wind cube is obtained, and the energy produced by a wind turbine with the optimized cube is compared with the energy generated without one. For all cases studied, the wind turbine with the optimized cube generates more than 20 times the energy of a wind turbine without a wind cube.
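The Gradient-Based Optimizer itself is not shown in the abstract, so the sketch below uses SciPy's differential evolution as a stand-in metaheuristic to minimize a two-part objective (energy-loss probability plus cost) over the four design variables; the loss and cost expressions, their weights, and the bounds are hypothetical placeholders.

```python
# Two-part objective over the wind-cube design variables, minimized by a stand-in
# metaheuristic (SciPy differential evolution, not the GBO). All expressions below
# are hypothetical placeholders for illustration.
import numpy as np
from scipy.optimize import differential_evolution

def energy_loss_probability(inner_r, outer_r, roughness, hub_h):
    # Placeholder: loss shrinks as the outer/inner ratio and hub height grow.
    ratio = outer_r / inner_r
    return 1.0 / (1.0 + 0.8 * ratio + 0.02 * hub_h) + 0.1 * roughness

def total_cost(inner_r, outer_r, roughness, hub_h):
    # Placeholder: material cost grows with cube size and hub height.
    return 0.5 * (outer_r ** 2 - inner_r ** 2) + 0.3 * hub_h

def objective(x):
    inner_r, outer_r, roughness, hub_h = x
    if outer_r <= inner_r:                  # infeasible geometry
        return 1e9
    return energy_loss_probability(*x) + 0.01 * total_cost(*x)

bounds = [(1.0, 10.0),    # inner radius (m)
          (2.0, 30.0),    # outer radius (m)
          (0.0, 1.0),     # roughness factor
          (10.0, 120.0)]  # hub height (m)

result = differential_evolution(objective, bounds, seed=1)
print("best design:", np.round(result.x, 2), "objective:", round(result.fun, 4))
```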
Data encryption is essential for securing the data exchanged between connected parties. Encryption is the process of transforming readable text into scrambled, unreadable text using secure keys. Stream ciphers are one type of encryption algorithm that relies on a single key for both encryption and decryption. Many existing encryption algorithms are built on a mathematical foundation or on biological, social, or physical behaviours; one technique is to utilise the behavioural aspects of game theory in a stream cipher. In this paper, we introduce an enhanced Deoxyribonucleic acid (DNA)-coded stream cipher based on an iterated n-player prisoner's dilemma paradigm. Our main goal is to add further layers of randomness to the keystream generation process; these layers are inspired by the behaviour of multiple players playing a prisoner's dilemma game. We implement parallelism to compensate for the additional processing time that may result from adding these extra layers of randomness. The results show that our enhanced design passes the statistical tests and achieves an encryption throughput of about 1,877 Mbit/s, which makes it a feasible secure stream cipher.
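The following toy sketch illustrates only the general idea of folding iterated prisoner's-dilemma outcomes into keystream generation: each round's moves and payoffs are hashed into keystream bytes that are XORed with the plaintext. It is a conceptual illustration, not the DNA-coded cipher proposed above, and the strategies, payoff matrix, and hashing step are ordinary textbook choices.

```python
# Toy stream cipher whose keystream is perturbed by iterated prisoner's-dilemma play.
# Conceptual sketch only; not the paper's DNA-coded design.
import hashlib
import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def keystream(key: bytes, n_bytes: int, n_players: int = 4) -> bytes:
    rng = random.Random(key)                       # deterministic, seeded by the key
    history = [rng.choice("CD") for _ in range(n_players)]
    out = bytearray()
    state = key
    while len(out) < n_bytes:
        # Each player plays tit-for-tat against its right-hand neighbour.
        moves = [history[(i + 1) % n_players] for i in range(n_players)]
        payoffs = [PAYOFF[(moves[i], moves[(i + 1) % n_players])][0]
                   for i in range(n_players)]
        state = hashlib.sha256(state + bytes(payoffs) + "".join(moves).encode()).digest()
        out.extend(state)
        history = moves
        if rng.random() < 0.1:                     # occasional key-driven strategy flip
            history[rng.randrange(n_players)] = rng.choice("CD")
    return bytes(out[:n_bytes])

plaintext = b"stream cipher demo"
key = b"shared-secret-key"
ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))
recovered = bytes(c ^ k for c, k in zip(ciphertext, keystream(key, len(ciphertext))))
print(recovered == plaintext)
```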
Active queue management (AQM) methods manage the queued packets at the router buffer, prevent buffer congestion, and stabilize network performance. The bursty nature of the traffic passing through network routers and the slack behavior of existing AQM methods lead to unnecessary packet dropping. This paper proposes a fully adaptive active queue management (AAQM) method to maintain stable network performance, avoid congestion and packet loss, and eliminate unnecessary packet dropping. The proposed AAQM method is based on load and queue-length indicators and uses an adaptive mechanism to adjust the dropping probability according to the buffer status. It adapts to both single-class and multiclass traffic models. Extensive simulations over two types of traffic showed that the proposed method achieved the best results compared with existing methods, including Random Early Detection (RED), BLUE, Effective RED (ERED), Fuzzy RED (FRED), Fuzzy Gentle RED (FGRED), and Fuzzy BLUE (FBLUE). The proposed and compared methods achieved similar results under low or moderate traffic load. However, under high traffic load, the proposed AAQM method achieved the best loss rate of zero, matching BLUE, compared with 0.01 for RED, 0.27 for ERED, 0.04 for FRED, 0.12 for FGRED, and 0.44 for FBLUE. For throughput, the proposed AAQM method achieved the highest rate of 0.54, surpassing BLUE's throughput of 0.43. For delay, the proposed AAQM method achieved the second-best delay of 28.51, while BLUE achieved the best delay of 13.18; however, BLUE's results are insufficient because of its low throughput. Consequently, the proposed AAQM method outperformed the compared methods with superior throughput and acceptable delay.
Funding: This work was funded by Arab Open University Grant Number (AOURG2023-005).
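The exact AAQM update rule is not given in the abstract, so the sketch below only mimics the described behaviour: the drop probability is nudged up when both queue occupancy and offered load are high and nudged down when the buffer drains. The thresholds, step size, and traffic pattern are invented for the example.

```python
# Toy adaptive drop-probability controller driven by queue length and load, in the
# spirit of the AAQM description above (all constants invented for the example).
import random

class AdaptiveAQM:
    def __init__(self, capacity=100, target=0.5, step=0.005):
        self.capacity = capacity      # buffer size in packets
        self.target = target          # desired queue occupancy (fraction of capacity)
        self.step = step              # adaptation step for the drop probability
        self.queue = 0
        self.drop_prob = 0.0

    def on_arrival(self, load):
        occupancy = self.queue / self.capacity
        # Raise the drop probability when buffer and offered load are both high,
        # lower it when the buffer drains, keeping changes gradual.
        if occupancy > self.target and load > 0.8:
            self.drop_prob = min(1.0, self.drop_prob + self.step)
        elif occupancy < self.target:
            self.drop_prob = max(0.0, self.drop_prob - self.step)
        if self.queue >= self.capacity or random.random() < self.drop_prob:
            return "drop"
        self.queue += 1
        return "enqueue"

    def on_departure(self):
        self.queue = max(0, self.queue - 1)

# Quick run under alternating bursty load.
random.seed(0)
aqm = AdaptiveAQM()
drops = 0
for t in range(10_000):
    load = 0.95 if (t // 500) % 2 == 0 else 0.3
    if random.random() < load and aqm.on_arrival(load) == "drop":
        drops += 1
    if random.random() < 0.6:
        aqm.on_departure()
print("drop probability:", round(aqm.drop_prob, 3), "drops:", drops)
```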
Skin cancer, particularly melanoma, poses a substantial risk to human health, which makes efficient early detection systems essential; this study examines their implementation through deep learning techniques. Existing methods, however, exhibit constraints in terms of accessibility, diagnostic precision, data availability, and scalability. To address these obstacles, we put forward a lightweight model called Smart MobiNet, which is derived from MobileNet and incorporates additional distinctive attributes. The model uses a multi-scale feature extraction methodology based on various convolutional layers. The ISIC 2019 dataset, sourced from the International Skin Imaging Collaboration, is employed in this study, and traditional data augmentation approaches are implemented to address model overfitting. We conduct experiments to evaluate and compare the performance of three models, namely a CNN, MobileNet, and Smart MobiNet, on the task of skin cancer detection. The findings indicate that the proposed model outperforms the other architectures, achieving an accuracy of 0.89, with balanced precision, sensitivity, and F1 scores, all measuring 0.90. This model can serve as a vital instrument that assists clinicians in efficiently and precisely detecting skin cancer.
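Smart MobiNet's architecture is not detailed above, so the Keras sketch below only demonstrates the general multi-scale idea of pooling several convolutional stages and concatenating them before classification; the layer sizes, the stand-in backbone, and the assumed class count are not taken from the paper.

```python
# Generic multi-scale feature extraction: pool several convolutional stages and
# concatenate them before classification. Illustrative only; not Smart MobiNet itself.
import tensorflow as tf
from tensorflow.keras import layers, Model

num_classes = 8                                   # assumed class count

inputs = layers.Input(shape=(224, 224, 3))
x1 = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
x2 = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x1)
x3 = layers.Conv2D(128, 3, strides=2, padding="same", activation="relu")(x2)

# One pooled descriptor per scale, then fuse.
features = layers.Concatenate()([
    layers.GlobalAveragePooling2D()(x1),
    layers.GlobalAveragePooling2D()(x2),
    layers.GlobalAveragePooling2D()(x3),
])
outputs = layers.Dense(num_classes, activation="softmax")(features)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```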
Face verification systems are critical in a wide range of applications, such as security systems and biometric authentication. However, these systems are vulnerable to adversarial attacks, which can significantly compromise their accuracy and reliability. Adversarial attacks are designed to deceive the face verification system by adding subtle perturbations to the input images. These perturbations can be imperceptible to the human eye but can cause the system to misclassify or fail to recognize the person in the image. To address this issue, we propose a novel system called VeriFace that comprises two defense mechanisms: adversarial detection and adversarial removal. The first mechanism, adversarial detection, identifies whether an input image has been subjected to adversarial perturbations. The second mechanism, adversarial removal, removes these perturbations from the input image so that the face verification system can accurately recognize the person in the image. To evaluate the effectiveness of the VeriFace system, we conducted experiments on different types of adversarial attacks using the Labelled Faces in the Wild (LFW) dataset. Our results show that the VeriFace adversarial detector can accurately identify adversarial images with a detection accuracy of 100%. Additionally, our proposed VeriFace adversarial removal method has a significantly lower attack success rate of 6.5% compared with state-of-the-art removal methods.
Funding: This work was funded by Institutional Fund Projects under Grant No. (IFPIP:329-611-1443), with technical and financial support provided by the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.
License plate recognition (LPR) is an image processing technology used to identify vehicles by their license plates. This paper presents a license plate recognition algorithm for Saudi car plates based on the support vector machine (SVM) algorithm. The new algorithm is efficient in recognizing vehicles from the Arabic part of the plate. The performance of the system has been investigated and analyzed, and the recognition accuracy of the algorithm is about 93.3%.
Team Formation (TF) is considered one of the most significant problems in computer science and optimization. TF is defined as forming the best team of experts in a social network to complete a task at the least cost. Many real-world problems, such as task assignment, vehicle routing, nurse scheduling, resource allocation, and airline crew scheduling, are based on the TF problem. TF has been shown to be a Nondeterministic Polynomial time (NP) problem and a high-dimensional problem with several local optima that can be solved using efficient approximation algorithms. This paper proposes two improved swarm-based algorithms for solving the team formation problem. The first algorithm, the Hybrid Heap-Based Optimizer with Simulated Annealing Algorithm (HBOSA), uses a single crossover operator to improve the performance of the standard heap-based optimizer (HBO) algorithm; it also employs the simulated annealing (SA) approach to improve convergence and avoid trapping in local minima. The second algorithm is the Chaotic Heap-Based Optimizer Algorithm (CHBO). CHBO aids the discovery of new solutions by directing particles to different regions of the search space; a logistic chaotic map is used during HBO's optimization process. The performance of the two proposed algorithms, HBOSA and CHBO, is evaluated on thirteen benchmark functions and tested on the TF problem with varying numbers of experts and skills. Furthermore, the proposed algorithms are compared with well-known optimization algorithms such as the Heap-Based Optimizer (HBO), Developed Simulated Annealing (DSA), Particle Swarm Optimization (PSO), Grey Wolf Optimization (GWO), and the Genetic Algorithm (GA). Finally, the proposed algorithms are applied to a real-world benchmark dataset known as the Internet Movie Database (IMDB). The simulation results reveal that the proposed algorithms outperform the compared algorithms in terms of efficiency and performance, with fast convergence to the global minimum.
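The logistic chaotic map mentioned for CHBO is simple to reproduce; the sketch below generates a chaotic sequence and uses it to scatter candidate solutions across the search bounds. The blending step is a generic illustration, not the exact CHBO update.

```python
# Logistic chaotic map and a generic chaotic re-positioning of candidate solutions.
import numpy as np

def logistic_map(x0, n, r=4.0):
    """Generate n chaotic values in (0, 1) from seed x0 (avoid fixed points such as 0.5 or 0.75)."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

rng = np.random.default_rng(3)
lower, upper = -10.0, 10.0
positions = rng.uniform(lower, upper, size=(5, 2))      # 5 particles, 2 decision variables

chaos = logistic_map(0.37, positions.size).reshape(positions.shape)
# Map each chaotic value into the search bounds and blend it with the current position.
chaotic_positions = 0.5 * positions + 0.5 * (lower + chaos * (upper - lower))
print(chaotic_positions.round(3))
```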
Big data is a vast amount of structured and unstructured data that must be dealt with on a regular basis. Dimensionality reduction is the process of converting a huge data set into one with far fewer dimensions so that the same information can be expressed compactly. These tactics are frequently utilized to improve classification or regression performance in machine learning problems. To achieve dimensionality reduction for huge data sets, this paper offers a hybrid particle swarm optimization-rough set (PSO-RS) method and a Mayfly algorithm-rough set (MA-RS) method. In particular, a novel hybrid strategy based on the Mayfly algorithm (MA) and rough set (RS) theory is proposed. The performance of the novel hybrid MA-RS algorithm is evaluated by solving six different data sets from the literature. The simulation results and comparison with common reduction methods demonstrate the proposed MA-RS algorithm's capacity to handle a wide range of data sets. Finally, the rough set approach, as well as the hybrid optimization techniques PSO-RS and MA-RS, were applied to the massive data problem. According to the experimental results and statistical testing studies, the hybrid MA-RS method beats other classic dimensionality reduction techniques.
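A rough-set dependency degree is a common fitness for this kind of feature-subset search: it measures the fraction of objects whose equivalence class under the chosen features lies entirely inside one decision class. The sketch below computes it on a made-up decision table; how PSO-RS or MA-RS would generate the candidate subsets is not shown.

```python
# Rough-set dependency degree of a feature subset, usable as a fitness score for
# swarm-based feature selection. The tiny decision table is made up for illustration.
from collections import defaultdict

data = [                                  # rows: (features..., decision)
    ("sunny", "hot", "no"),
    ("sunny", "mild", "no"),
    ("rainy", "mild", "yes"),
    ("rainy", "hot", "yes"),
    ("sunny", "hot", "yes"),
]

def dependency_degree(rows, feature_idx):
    """gamma(B) = |POS_B(D)| / |U| for the feature subset B given by feature_idx."""
    classes = defaultdict(list)
    for row in rows:
        key = tuple(row[i] for i in feature_idx)
        classes[key].append(row[-1])
    positive = sum(len(decisions) for decisions in classes.values()
                   if len(set(decisions)) == 1)       # decision-consistent classes
    return positive / len(rows)

for subset in ([0], [1], [0, 1]):
    print(subset, "-> gamma =", dependency_degree(data, subset))
```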
Software systems have been employed in many fields as a means to reduce human effort; consequently, stakeholders are interested in continually updating their capabilities. Code smells arise as one of the obstacles in the software industry. They are characteristics of software source code that indicate a deeper design problem, and they appear not only in the design but also in the implementation. Code smells introduce bugs, affect software maintainability, and lead to higher maintenance costs. Uncovering code smells can be formulated as an optimization problem of finding the best detection rules. Although researchers have recommended different techniques to improve the accuracy of code smell detection, these methods are still unstable and need improvement. Previous research has sought to discover only a few smell types at a time (three or five) and did not set rules for detecting each type. Our research improves code smell detection by applying a search-based technique: we use the Whale Optimization Algorithm as a classifier to find ideal detection rules. Within this algorithm, the Fisher criterion is used as the fitness function to maximize the between-class distance relative to the within-class variance. The proposed framework adopts if-then detection rules during the software development life cycle, and those rules identify the smell types for both medium and large projects. Experiments are conducted on five open-source software projects to discover nine smell types that most commonly appear in code. The proposed detection framework achieves an average of 94.24% precision and 93.4% recall, better than other search-based algorithms in the same field. The proposed framework improves code smell detection, which increases software quality while minimizing maintenance effort, time, and cost. Additionally, the resulting classification rules are analyzed to find the software metrics that differentiate the nine code smells.
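The Fisher criterion used as the fitness function is straightforward: the squared distance between class means divided by the sum of within-class variances. The sketch below scores how well one software metric separates smelly from clean code elements; the metric values are synthetic.

```python
# Fisher criterion as a fitness score: between-class distance over within-class variance.
import numpy as np

def fisher_criterion(smelly, clean):
    smelly, clean = np.asarray(smelly, float), np.asarray(clean, float)
    between = (smelly.mean() - clean.mean()) ** 2
    within = smelly.var() + clean.var()
    return between / within if within > 0 else float("inf")

# e.g. "number of methods" measured on classes flagged as God Class vs. clean classes
smelly_nom = [42, 55, 61, 48, 70]
clean_nom = [8, 12, 10, 15, 9]
print("Fisher score:", round(fisher_criterion(smelly_nom, clean_nom), 2))
```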
Digit recognition is an essential element of the process of scanning and converting documents into electronic format. In this work, a new Multiple-Cell Size (MCS) approach is proposed for utilizing Histogram of Oriented Gradient (HOG) features with a Support Vector Machine (SVM)-based classifier for efficient classification of handwritten digits. The HOG-based technique is sensitive to the cell size used in the feature extraction computations, so the new MCS approach is used to perform the HOG analysis and compute the HOG features. The system has been tested on the benchmark MNIST database of handwritten digits, and a classification accuracy of 99.36% has been achieved using an independent test-set strategy. A cross-validation analysis of the classification system has also been performed using the 10-fold cross-validation strategy, yielding a 10-fold classification accuracy of 99.26%. The classification performance of the proposed system is superior to that of existing techniques that rely on complex procedures, since it achieves comparable or better results using simple operations in both the feature space and the classifier space. The plots of the system's confusion matrix and Receiver Operating Characteristics (ROC) provide further evidence of the superior performance of the proposed MCS HOG and SVM based digit classification system.
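A minimal version of the Multiple-Cell Size idea can be put together from scikit-image and scikit-learn: compute HOG descriptors at more than one cell size, concatenate them, and train an SVM. The sketch below uses scikit-learn's small 8x8 digits set as a stand-in for MNIST, and the two cell sizes and SVM settings are illustrative choices.

```python
# Multiple-cell-size HOG features concatenated and fed to an SVM classifier.
# scikit-learn's 8x8 digits set stands in for MNIST here.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from skimage.feature import hog

digits = load_digits()

def mcs_hog(image, cell_sizes=((2, 2), (4, 4))):
    feats = [hog(image, orientations=9, pixels_per_cell=cs,
                 cells_per_block=(1, 1), feature_vector=True)
             for cs in cell_sizes]
    return np.concatenate(feats)

X = np.array([mcs_hog(img) for img in digits.images])
X_train, X_test, y_train, y_test = train_test_split(
    X, digits.target, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", C=10, gamma="scale")
clf.fit(X_train, y_train)
print("independent test-set accuracy:", round(clf.score(X_test, y_test), 4))
```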
The boundary effect in digital pathology is a phenomenon in which the tissue shapes of biopsy samples become distorted during the sampling process, greatly affecting the morphological pattern of the epithelial layer. Theoretically, a shape deformation model can normalise the distortions, but it requires a 2D image, while curvature theory has not yet been tested on digital pathology images. Therefore, this work proposes curvature detection to reduce the boundary effects and estimate the epithelial layer. The boundary effect on the tissue surfaces is normalised using the frequency with which a curve deviates from a straight line. The epithelial layer's depth is estimated from the tissue edges and the connected nucleoli only. The textural and spatial features along the estimated layer are then used for dysplastic tissue detection. The proposed method achieved better performance in detecting dysplastic tissue than using whole tissue regions; the result shows a leap in kappa points from fair to substantial agreement with the expert's ground-truth classification. The improved results demonstrate that curvatures are effective in reducing the boundary effects on the epithelial layer of tissue. Thus, quantifying and classifying the morphological patterns of dysplasia can be automated, and the textural and spatial features on the detected epithelial layer can capture the changes in tissue.
Funding: This work was supported by the Center for Research and Innovation Management (CRIM), Universiti Kebangsaan Malaysia (Grant No. FRGS-1-2019-ICT02-UKM02-6), and the Ministry of Higher Education Malaysia.
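Discrete curvature along a sampled boundary can be computed with the standard formula k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2); counting how often |k| exceeds a threshold is one simple way to quantify how frequently the edge deviates from a straight line, as described above. The wavy contour and the threshold in the sketch are synthetic.

```python
# Discrete curvature along a sampled boundary; high-|k| points mark deviations
# from a straight line. The distorted contour is synthetic.
import numpy as np

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
x = 100 * np.cos(t) + 4 * np.cos(9 * t)       # roughly circular, distorted boundary
y = 100 * np.sin(t) + 4 * np.sin(9 * t)

dx, dy = np.gradient(x), np.gradient(y)
ddx, ddy = np.gradient(dx), np.gradient(dy)
curvature = (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5

threshold = 0.02                               # assumed curvature threshold
deviation_frequency = np.mean(np.abs(curvature) > threshold)
print("fraction of boundary points deviating from a straight line:",
      round(float(deviation_frequency), 3))
```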
Deep learning is a powerful technique that is widely applied to image recognition and natural language processing tasks, among many others. In this work, we propose an efficient technique for utilizing pre-trained Convolutional Neural Network (CNN) architectures to extract powerful image features for object recognition. We build on the existing concept of extending the learning from pre-trained CNNs to new databases through activations by considering multiple deep layers: we exploit the progressive learning that happens at the various intermediate layers of the CNNs to construct Deep Multi-Layer (DM-L) feature extraction vectors that achieve excellent object recognition performance. Two popular pre-trained CNN architectures, VGG_16 and VGG_19, are used to extract feature sets from three deep fully connected layers, namely "fc6", "fc7", and "fc8", inside the models. Using Principal Component Analysis (PCA), the dimensionality of the DM-L feature vectors is reduced to form powerful feature vectors that are fed to an external classifier ensemble for classification, instead of the Softmax-based classification layers of the two original pre-trained CNN models. The proposed DM-L technique has been applied to the benchmark Caltech-101 object recognition database. Conventional wisdom may suggest that features extracted from the deepest layer, "fc8", would outperform those from "fc6", but our results proved otherwise for the two considered models: for both, the "fc6"-based feature vectors achieved the best recognition performance. State-of-the-art recognition performances of 91.17% and 91.35% were achieved by utilizing the "fc6"-based feature vectors for the VGG_16 and VGG_19 models, respectively. These results were obtained with 30 sample images per class, and the proposed system is capable of further improvement when all sample images per class are considered. Our research shows that, for CNN-based feature extraction, multiple layers should be considered so that the layer that maximizes recognition performance can be selected.
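In the Keras distribution of VGG16 the fully connected layers are named "fc1" and "fc2" (the counterparts of "fc6" and "fc7" above), so a DM-L-style extraction can tap them directly. The sketch below extracts "fc1" activations, reduces them with PCA, and trains an external classifier; the random image batch and labels are placeholders for Caltech-101 data, and a single SVM stands in for the paper's classifier ensemble.

```python
# Extract fully connected VGG16 activations, reduce with PCA, classify externally.
# The image batch and labels are placeholders; a single SVM stands in for an ensemble.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.models import Model
from sklearn.decomposition import PCA
from sklearn.svm import SVC

base = VGG16(weights="imagenet")                        # includes the fc layers
feature_model = Model(inputs=base.input, outputs=base.get_layer("fc1").output)

# Placeholder batch: 20 images, 224x224x3, with dummy labels for two classes.
images = preprocess_input(np.random.randint(0, 256, (20, 224, 224, 3)).astype("float32"))
labels = np.array([0] * 10 + [1] * 10)

features = feature_model.predict(images, verbose=0)    # 4096-dim "fc1" activations
reduced = PCA(n_components=10).fit_transform(features)

clf = SVC(kernel="linear").fit(reduced, labels)
print("feature shape:", features.shape, "-> reduced:", reduced.shape)
```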
The palm vein authentication technology is extremely safe, accurate, and reliable, as it uses the vascular patterns contained within the body to confirm personal identification. The pattern of veins in the palm is complex and unique to each individual, and its non-contact operation gives it a hygienic advantage over other biometric technologies. This paper presents an algebraic method for personal authentication and identification using internal, contactless palm vein images. We use the MATLAB Image Processing Toolbox to enhance the palm vein images and employ the coset decomposition concept to store and identify the encoded palm vein feature vectors. Experimental evidence shows the validity and effectiveness of the proposed approach.
Improving learning outcomes has always been an important motivating factor in educational inquiry. In a blended learning environment, where e-learning and traditional face-to-face class tutoring are combined, there are opportunities to explore the role of technology in improving students' grades. A student's performance is affected by many factors, such as engagement, self-regulation, peer interaction, the tutor's experience, and the tutor's time involvement with students. Furthermore, e-course design factors such as personalized learning are an urgent requirement for an improved learning process. In this paper, an artificial neural network model is introduced as a type of supervised learning, meaning that the network is provided with example input parameters of learning together with the desired, optimized, and correct output for that input. We also describe how e-learning interactions and social analytics can be used with an artificial neural network to produce a converging mathematical model. Students' performance can then be efficiently predicted, so the risk of failing an enrolled e-course can be reduced.
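As a minimal sketch of the supervised-learning setup described above, the code below trains a small multilayer perceptron on engagement-style input features with known pass/fail outcomes and then scores its predictions; the feature names and the synthetic data are illustrative assumptions only.

```python
# Small MLP trained on synthetic engagement features to predict pass/fail outcomes.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 500
# [logins per week, forum posts, assignment score %, tutor contact hours] -- assumed features
X = np.column_stack([rng.poisson(5, n), rng.poisson(3, n),
                     rng.uniform(0, 100, n), rng.uniform(0, 5, n)])
passed = (0.04 * X[:, 0] + 0.05 * X[:, 1] + 0.01 * X[:, 2] + 0.2 * X[:, 3]
          + rng.normal(0, 0.3, n)) > 1.2

X_train, X_test, y_train, y_test = train_test_split(X, passed, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("hold-out accuracy:", round(model.score(X_test, y_test), 3))
```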
Biometric identification systems are principally related to information security as well as data protection and encryption. This paper proposes a method to integrate biometric data encryption and authentication into error correction techniques. The usual method of matching biometric templates is replaced by a more powerful, high-quality identification approach based on Gröbner basis computations. In normal biometric systems, where the data are always noisy, only approximate matching is expected; our cryptographic method, however, gives exact matching.
Blind people have many tasks to do in their lives, but blindness creates challenges in performing them. Many blind persons use a traditional stick to move around and carry out their tasks; however, a traditional stick does not detect obstacles, so it is of limited help to visually impaired people. A blind person has no idea what kind of object or obstacle is in front of them, how large it is, or how far away it is, which makes getting around difficult. To assist people with vision impairment and make many of their daily tasks simple, comfortable, and organized, a smart stick with a mobile application can be used to help them recognize obstacles. One such solution is a mobile-based Internet of Things system: a technologically advanced stick intended to help visually impaired people navigate more easily and carry out their daily tasks with ease and comfort. This paper proposes a combined software and hardware system that helps visually impaired people find their way easily and comfortably. The proposed system uses a smart stick and a mobile application to help blind and visually impaired people identify objects in their path (such as walls, tables, vehicles, and people) so that they can avoid them. The system notifies the user through sound from the smartphone, and if the user gets lost, they can send an SMS containing their GPS location.
Nowadays, with the advent of the Web 2.0 era, several social recommendation methods that use social network information have been proposed and have achieved notable progress. However, the most critical challenges for the majority of these methods are: (1) they tend to utilize only the available social relations between users and deal only with the cold-start user issue; and (2) they fail to exploit content information such as social tagging, which can provide various sources from which to extract item information, overcome the cold-start item issue, and improve recommendation quality. In this paper, we investigate the efficiency of data fusion by integrating multiple sources of information. First, two essential factors, user-side information and item-side information, are identified. Second, we develop a novel social recommendation model called Two-Sided Regularization (TSR), which is based on the probabilistic matrix factorization method. Finally, an effective quantum-based similarity method is adapted to measure the similarity between users and between items in the proposed model. Experimental results on a real dataset show that the proposed TSR model addresses both the cold-start user and cold-start item issues and outperforms state-of-the-art recommendation methods. These results indicate the importance of incorporating various sources of information in the recommendation process.
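The exact TSR objective and the quantum-based similarity are not reproduced here; the sketch below shows the general "two-sided" idea with a plain SGD matrix factorization in which user factors are regularized toward similar users and item factors toward similar items. The tiny rating matrix, the random similarity matrices, and the hyper-parameters are synthetic.

```python
# Generic two-sided regularized matrix factorization: fit observed ratings while pulling
# user factors toward similar users and item factors toward similar items.
# Illustrative only; not the exact TSR formulation.
import numpy as np

rng = np.random.default_rng(0)
R = np.array([[5, 3, 0, 1],            # 0 = unobserved rating
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], float)
n_users, n_items, k = R.shape[0], R.shape[1], 3

S_user = rng.uniform(0, 1, (n_users, n_users)); S_user = (S_user + S_user.T) / 2
S_item = rng.uniform(0, 1, (n_items, n_items)); S_item = (S_item + S_item.T) / 2

U = 0.1 * rng.standard_normal((n_users, k))
V = 0.1 * rng.standard_normal((n_items, k))
lr, lam, alpha, beta = 0.01, 0.05, 0.02, 0.02     # assumed hyper-parameters

for epoch in range(300):
    for u in range(n_users):
        for i in range(n_items):
            if R[u, i] == 0:
                continue
            err = R[u, i] - U[u] @ V[i]
            # user-side and item-side regularization pulls
            user_pull = sum(S_user[u, f] * (U[u] - U[f]) for f in range(n_users))
            item_pull = sum(S_item[i, j] * (V[i] - V[j]) for j in range(n_items))
            U[u] += lr * (err * V[i] - lam * U[u] - alpha * user_pull)
            V[i] += lr * (err * U[u] - lam * V[i] - beta * item_pull)

mask = R > 0
rmse = np.sqrt(np.mean((R[mask] - (U @ V.T)[mask]) ** 2))
print("training RMSE on observed ratings:", round(float(rmse), 4))
```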