Wireless sensor networks (WSNs) sense and gather information samples in a certain region and communicate these readings to a base station (BS). Energy efficiency is considered a major design issue in WSNs and can be addressed using clustering and routing techniques. Information is sent from the source to the BS via routing procedures. However, these routing protocols must ensure that packets are delivered securely, guaranteeing that neither adversaries nor unauthenticated individuals have access to the transmitted information. Secure data transfer is intended to protect the data from illegal access, damage, or disruption. Thus, in the proposed model, secure data transmission is developed in an energy-effective manner. A low-energy adaptive clustering hierarchy (LEACH) is developed to transfer the data efficiently, and fuzzy logic together with artificial neural networks (ANNs) is proposed for the intrusion detection system (IDS). Initially, the nodes were randomly placed in the network and initialized to gather information. To ensure fair energy dissipation between the nodes, LEACH randomly chooses cluster heads (CHs) and allocates this role to the various nodes based on a round-robin management mechanism. The intrusion-detection procedure was then utilized to determine whether intruders were present in the network. Within the WSN, a fuzzy inference rule was utilized to distinguish malicious nodes from legal nodes. Subsequently, an ANN was employed to distinguish harmful nodes from merely suspicious nodes. The effectiveness of the proposed approach was validated using metrics that attained 97% accuracy, 97% specificity, and 95% sensitivity. Thus, the LEACH and fuzzy-based IDS approaches were shown to be strong choices for securing data transmission in an energy-efficient manner.
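The abstract above does not spell out the election rule, but the round-robin, probabilistic CH rotation it refers to is the standard LEACH threshold; a minimal Python sketch, with the CH fraction `p` and the bookkeeping structure as assumptions, might look like this:

```python
import random

def leach_ch_election(nodes, p=0.05, round_idx=0):
    """One LEACH election round.

    nodes: dict node_id -> last round the node served as CH (or None).
    p: desired fraction of cluster heads per round (assumed value).
    Returns the set of node ids elected as CH this round.
    """
    period = int(1 / p)                      # each node should serve once per 1/p rounds
    heads = set()
    for node_id, last_ch_round in nodes.items():
        # Nodes that already served as CH within the current period are excluded,
        # which is what enforces the round-robin fairness in energy dissipation.
        if last_ch_round is not None and round_idx - last_ch_round < period:
            continue
        threshold = p / (1 - p * (round_idx % period))
        if random.random() < threshold:
            heads.add(node_id)
            nodes[node_id] = round_idx
    return heads
```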
In this paper, we present a protocol, CEWEC (Collaborative, Event-Triggered, Weighted, Energy-Efficient Clustering), based on collaborative beamforming. It is designed for wireless sensor nodes to realize long-distance transmission. In order to save the energy of sensor nodes, a node "wakes up" when it has data to be uploaded. In our protocol, a multi-layer structure is adopted: trigger-node layers, clusterhead-node layers, and child-node layers. The number of child nodes and clusterheads depends on the transmission distance. Clusterheads are selected according to a node's weight, which is based on its residual energy and its distance to the trigger node. The main characteristic of this protocol is that clusterheads can communicate directly with each other without a large-scale base station and antennas. Thus, the data from the trigger node can be shared within the multi-layer structure. Considering the clustering process, energy model, and success rate, the simulation results show that the CEWEC protocol can effectively manage a large number of sensor nodes to share and transmit data.
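The abstract states only that the weight combines residual energy and distance to the trigger node; the linear combination, the trade-off factor `alpha`, and the top-k selection below are assumptions used purely for illustration:

```python
import math

def cewec_weight(residual_energy, dist_to_trigger, alpha=0.5):
    # Illustrative weight: favour nodes with more remaining energy and a shorter
    # distance to the trigger node. The exact formula is not given in the abstract;
    # alpha is an assumed trade-off factor.
    return alpha * residual_energy - (1 - alpha) * dist_to_trigger

def select_clusterheads(nodes, trigger_pos, k):
    # nodes: list of (node_id, (x, y), residual_energy); pick the k highest-weight nodes.
    scored = [(cewec_weight(energy, math.dist(pos, trigger_pos)), node_id)
              for node_id, pos, energy in nodes]
    scored.sort(reverse=True)
    return [node_id for _, node_id in scored[:k]]
```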
Wireless sensor networks (WSNs) are made up of several sensors located in a specific area and powered by a finite amount of energy to gather environmental data. WSNs use sensor nodes (SNs) to collect and transmit data. However, the power supplied to the sensor network is restricted. Thus, SNs must conserve energy as much as possible to extend the lifespan of the network. In the proposed study, effective clustering and longer network lifetimes are achieved using multi-swarm optimization (MSO) and game theory based on locust search (LS-II). In this research, MSO is used to improve the optimum routing, while the LS-II approach is employed to specify the number of cluster heads (CHs) and select the best ones. After the CHs are identified, the other sensor nodes are allocated to the CHs closest to them. A game theory-based energy-efficient clustering approach is applied to the WSN, in which each SN is considered a player in the game. An SN can act in its own interest depending on the length of the idle listening time in the active phase and then decide whether or not to rest. The proposed multi-swarm with energy-efficient game theory on locust search (MSGE-LS) efficiently selects CHs, minimizes energy consumption, and improves the lifetime of the network. The findings of this study indicate that the proposed MSGE-LS is an effective method, as the results show improvements in the number of clusters, average energy consumption, lifespan extension, average packet loss, and end-to-end delay.
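Of the steps described above, allocating the remaining sensor nodes to their closest cluster heads is the most mechanical one; a small sketch, assuming plain Euclidean distance and node positions stored as coordinate pairs, could be:

```python
import math

def assign_to_nearest_ch(nodes, cluster_heads):
    # nodes, cluster_heads: dicts mapping node_id -> (x, y).
    # Each non-CH node is assigned to the cluster head with the smallest Euclidean distance.
    assignment = {}
    for node_id, pos in nodes.items():
        if node_id in cluster_heads:
            continue
        assignment[node_id] = min(cluster_heads,
                                  key=lambda ch: math.dist(pos, cluster_heads[ch]))
    return assignment
```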
Many fields, such as neuroscience, are experiencing a vast proliferation of cellular data, underscoring the need for organizing and interpreting large datasets. A popular approach partitions data into manageable subsets via hierarchical clustering, but objective methods to determine the appropriate classification granularity are missing. We recently introduced a technique to systematically identify when to stop subdividing clusters based on the fundamental principle that cells must differ more between than within clusters. Here we present the corresponding protocol to classify cellular datasets by combining data-driven unsupervised hierarchical clustering with statistical testing. These general-purpose functions are applicable to any cellular dataset that can be organized as two-dimensional matrices of numerical values, including molecular, physiological, and anatomical datasets. We demonstrate the protocol using cellular data from the Janelia MouseLight project to characterize morphological aspects of neurons.
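The published protocol defines its own stopping statistics; as a hedged illustration of the same idea (stop subdividing when between-cluster distances no longer significantly exceed within-cluster distances), one could write something like:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, cdist
from scipy.stats import mannwhitneyu

def split_if_justified(data, alpha=0.05):
    # data: cells x features matrix. Try a binary Ward split and keep it only if
    # between-cluster distances are significantly larger than within-cluster ones.
    # The test, linkage method, and alpha are illustrative assumptions, not the
    # statistics used in the published protocol.
    data = np.asarray(data, dtype=float)
    labels = fcluster(linkage(data, method="ward"), t=2, criterion="maxclust")
    a, b = data[labels == 1], data[labels == 2]
    if len(a) < 2 or len(b) < 2:
        return None
    within = np.concatenate([pdist(a), pdist(b)])
    between = cdist(a, b).ravel()
    _, p_value = mannwhitneyu(between, within, alternative="greater")
    return labels if p_value < alpha else None   # None means: do not subdivide further
```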
Wireless Sensor Networks (WSNs) are a collection of sensor nodes distributed in space and connected through wireless communication. The sensor nodes gather and store data about the real world around them. However, nodes that depend on batteries will ultimately suffer an energy loss over time, which affects the lifetime of the network. The primary goal of this research is to reduce energy consumption and increase the network's lifetime and stability. The present technique employs the hybrid Mayfly Optimization Algorithm-Enhanced Ant Colony Optimization (MFOA-EACO), where the Mayfly Optimization Algorithm (MFOA) is used to select the best cluster head (CH) from a set of nodes, and the Enhanced Ant Colony Optimization (EACO) technique is used to determine an optimal route between the cluster head and the base station. The performance evaluation of our suggested hybrid approach is based on several parameters, including the number of active and dead nodes, node degree, distance, and energy usage. Our objective is to integrate MFOA-EACO to enhance energy efficiency and extend the network lifetime of the WSN in the future. The outcomes of the proposed method proved to be better than traditional approaches such as the Hybrid Squirrel-Flying Fox Optimization Algorithm (HSFLBOA), Hybrid Social Reindeer Optimization and Differential Evolution-Firefly Algorithm (HSRODE-FFA), Social Spider Distance Sensitive-Iterative Antlion Butterfly Cockroach Algorithm (SADSS-IABCA), and Energy Efficient Clustering Hierarchy Strategy-Improved Social Spider Algorithm Differential Evolution (EECHS-ISSADE).
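The abstract does not detail EACO's enhancements, but ant-colony route construction is typically driven by the classical transition rule; a minimal sketch of that rule, with the pheromone and heuristic tables as assumed inputs, is:

```python
import random

def aco_next_hop(current, candidates, pheromone, heuristic, alpha=1.0, beta=2.0):
    # Classical ant-colony transition rule (EACO's specific enhancements are not
    # reproduced). pheromone and heuristic are dicts keyed by (current, candidate);
    # alpha and beta weight pheromone trail versus heuristic desirability.
    weights = [(pheromone[(current, c)] ** alpha) * (heuristic[(current, c)] ** beta)
               for c in candidates]
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for candidate, w in zip(candidates, weights):      # roulette-wheel selection
        acc += w
        if r <= acc:
            return candidate
    return candidates[-1]
```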
Energy efficiency and sensing coverage are essential metrics for enhancing the lifetime and the utilization of wireless sensor networks. Many protocols have been developed to address these issues, among which clustering is considered a key technique for minimizing the consumed energy. However, few clustering protocols address the sensing coverage metric. This paper proposes a general framework that addresses both metrics for clustering algorithms in wireless sensor networks. The proposed framework is based on applying the principles of Virtual Field Force on each cluster within the network in order to move the sensor nodes towards proper locations that maximize the sensing coverage and minimize the transmitted energy. Two types of virtual forces are used: an attractive force that moves the nodes towards the cluster head in order to reduce the energy used for communication, and a repulsive force that moves overlapping nodes away from each other such that their sensing coverage is maximized. The performance of the proposed mechanism was evaluated by applying it to the well-known LEACH clustering algorithm. The simulation results demonstrate that the proposed mechanism improves the performance of the LEACH protocol considerably in terms of the achieved sensing coverage and the network lifetime.
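A single relocation step of the mechanism described above can be sketched as follows; the force gains and the "overlap when closer than twice the sensing range" criterion are assumptions, since the abstract does not give the exact force laws:

```python
import numpy as np

def virtual_force_step(positions, ch_pos, sensing_range, k_att=0.01, k_rep=0.05):
    # positions: (n, 2) array of node coordinates; ch_pos: cluster-head coordinates.
    # Attraction pulls every node towards the CH; repulsion pushes apart any two nodes
    # whose sensing disks overlap. Gains k_att and k_rep are assumed values.
    positions = np.asarray(positions, dtype=float)
    forces = k_att * (np.asarray(ch_pos, dtype=float) - positions)
    n = len(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = positions[i] - positions[j]
            d = np.linalg.norm(diff)
            if 0.0 < d < 2.0 * sensing_range:          # overlapping coverage disks
                forces[i] += k_rep * (2.0 * sensing_range - d) * diff / d
    return positions + forces
```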
Through the formation of Wireless Sensor Networks (WSNs), industrial and academic communities have seen remarkable development in recent decades. One of the most common techniques to derive the best out of a wireless sensor network is to organize its operating groups, and the most important problem addressed by clustering methods is arranging the optimal number of sensor nodes into clusters. In this method, new client nodes and dynamic methods are used to determine the optimal number of clusters and cluster heads, which are reorganized and reclassified in each round. Parameters for effective energy use and for deciding the best way to attach nodes to clusters are included. Coverage problems change the available network routes, and the resulting traffic and delays make it difficult to keep performance high. A newer version of the Gravity Analysis Algorithm (GAA) is used to solve this problem. The proposed GAA is introduced to improve network lifetime, system energy efficiency, and end-to-end delay performance. Simulation results show that the modified GAA performs better than other approaches, including the more advanced Life Time Delay Clustering Algorithm (LTDCA) protocols. The proposed method provides improved data collection and increased throughput in wireless sensor networks.
The study delves into the expanding role of network platforms in our daily lives, encompassing various mediums like blogs, forums, online chats, and prominent social media platforms such as Facebook, Twitter, and Instagram. While these platforms offer avenues for self-expression and community support, they concurrently harbor negative impacts, fostering antisocial behaviors like phishing, impersonation, hate speech, cyberbullying, cyberstalking, cyberterrorism, fake news propagation, spamming, and fraud. Notably, individuals also leverage these platforms to connect with authorities and seek aid during disasters. The overarching objective of this research is to address the dual nature of network platforms by proposing innovative methodologies aimed at enhancing their positive aspects and mitigating their negative repercussions. To achieve this, the study introduces a weight learning method grounded in multi-linear attribute ranking. This approach serves to evaluate the significance of attribute combinations across all feature spaces. Additionally, a novel clustering method based on tensors is proposed to elevate the quality of clustering while effectively distinguishing selected features. The methodology incorporates a weighted average similarity matrix and optionally integrates weighted Euclidean distance, contributing to a more nuanced understanding of attribute importance. The analysis of the proposed methods yields significant findings. The weight learning method proves instrumental in discerning the importance of attribute combinations, shedding light on key aspects within feature spaces. Simultaneously, the clustering method based on tensors exhibits improved efficacy in enhancing clustering quality and feature distinction. This not only advances our understanding of attribute importance but also paves the way for more nuanced data analysis methodologies. In conclusion, this research underscores the pivotal role of network platforms in contemporary society, emphasizing their potential for both positive contributions and adverse consequences. The proposed methodologies offer novel approaches to address these dualities, providing a foundation for future research and practical applications. Ultimately, this study contributes to the ongoing discourse on optimizing the utility of network platforms while minimizing their negative impacts.
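Where the learned attribute weights feed into clustering, the optional weighted Euclidean distance mentioned above reduces to a few lines; this sketch assumes the weights are already available per feature:

```python
import numpy as np

def weighted_euclidean(x, y, weights):
    # Distance between two feature vectors using learned per-attribute weights.
    x, y, w = np.asarray(x, float), np.asarray(y, float), np.asarray(weights, float)
    return float(np.sqrt(np.sum(w * (x - y) ** 2)))
```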
The limited energy and high mobility of unmanned aerial vehicles (UAVs) lead to drastic topology changes in a UAV formation. Existing routing protocols necessitate a large number of messages for route discovery and maintenance, greatly increasing network delay and control overhead. An energy-efficient routing method based on the discrete time-aggregated graph (TAG) theory is proposed, since a UAV formation is a defined time-varying network. The network is characterized using the TAG, which utilizes the prior knowledge available in UAV formation. An energy-efficient routing algorithm is designed based on the TAG, considering the link delay, relative mobility, and residual energy of UAVs. The routing path is determined with global network information before communication is requested. Simulation results demonstrate that the routing method can improve the end-to-end delay, packet delivery ratio, routing control overhead, and residual energy. Consequently, introducing time-varying graphs to design routing algorithms is more effective for UAV formation.
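The abstract names the three per-link quantities but not how they are combined; a hedged sketch, using an assumed weighted-sum edge cost and plain Dijkstra on one snapshot of the formation graph, is given below:

```python
import heapq

def energy_aware_route(graph, src, dst, w_delay=1.0, w_mob=1.0, w_energy=1.0):
    # graph[u][v] = (link_delay, relative_mobility, residual_energy_of_v).
    # The weighted-sum cost below is an assumption; the paper defines its own metric
    # on the time-aggregated graph.
    def cost(delay, mobility, energy):
        return w_delay * delay + w_mob * mobility + w_energy / max(energy, 1e-9)

    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        c, u = heapq.heappop(pq)
        if u == dst:
            break
        if c > dist.get(u, float("inf")):
            continue
        for v, edge in graph.get(u, {}).items():
            nc = c + cost(*edge)
            if nc < dist.get(v, float("inf")):
                dist[v], prev[v] = nc, u
                heapq.heappush(pq, (nc, v))
    if dst != src and dst not in prev:
        return None                      # destination unreachable in this snapshot
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]
```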
In clustering algorithms, the selection of neighbors significantly affects the quality of the final clustering results. While various neighbor relationships exist, such as K-nearest neighbors, natural neighbors, and shared neighbors, most neighbor relationships can only handle single structural relationships, and their identification accuracy is low for datasets with multiple structures. In everyday life, people's first instinct when facing complex things is to divide them into multiple parts. Partitioning the dataset into more sub-graphs is likewise a good approach to identifying complex structures. Taking inspiration from this, we propose a novel neighbor method: Shared Natural Neighbors (SNaN). To demonstrate the superiority of this neighbor method, we propose a shared natural neighbors-based hierarchical clustering algorithm for discovering arbitrary-shaped clusters (HC-SNaN). Our algorithm excels in identifying both spherical clusters and manifold clusters. Tested on synthetic and real-world datasets, HC-SNaN demonstrates significant advantages over existing clustering algorithms, particularly when dealing with datasets containing arbitrary shapes.
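SNaN is built on natural neighbors, which are parameter-free; the simplified stand-in below uses a fixed-k neighborhood only to show how a shared-neighbor count between every pair of points can be obtained:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def shared_neighbor_counts(X, k=10):
    # Simplified stand-in for the SNaN relation: count shared k-nearest neighbors
    # per pair of points (the actual method uses natural, not fixed-k, neighbors).
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    neighbor_sets = [set(row[1:]) for row in idx]        # drop the point itself
    n = len(X)
    shared = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            shared[i, j] = shared[j, i] = len(neighbor_sets[i] & neighbor_sets[j])
    return shared
```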
Hotel buildings are currently among the largest energy consumers in the world. Heating, ventilation, and air conditioning are the most energy-intensive building systems, accounting for more than half of total energy consumption. An energy audit is used to predict the weak points of a building's energy use. Various factors influence building energy consumption and can be modified to achieve more energy-efficient strategies. In this study, an existing hotel building in Central Taiwan is evaluated by simulating several scenarios using energy modeling over a year. The energy modeling is conducted using Autodesk Revit 2025. The results show that arranging the lighting schedule based on ASHRAE Standard 90.1 could save up to 8.22% of energy consumption. The results also reveal that changing the glazing of the building to double-layer low-emissivity glass could reduce energy consumption by 14.58%, while energy consumption could be decreased by 7.20% by changing the building orientation to the north. Meanwhile, moving the building location to Northern Taiwan could reduce the energy consumption of the building by 3.23%. The results reveal that the double layer offers better thermal insulation, and low-emissivity glass can lower energy consumption, electricity costs, and CO2 emissions by up to 15.27% annually. While adjusting orientation and location can enhance energy performance, this approach is impractical for existing buildings; however, it could be considered when designing new buildings. The results show the relevance of energy performance to CO2 emissions and electricity expenses.
In this paper, we introduce a novel Multi-scale and Auto-tuned Semi-supervised Deep Subspace Clustering (MAS-DSC) algorithm, aimed at addressing the challenges of deep subspace clustering in high-dimensional real-world data, particularly in the field of medical imaging. Traditional deep subspace clustering algorithms, which are mostly unsupervised, are limited in their ability to effectively utilize the inherent prior knowledge in medical images. Our MAS-DSC algorithm incorporates a semi-supervised learning framework that uses a small amount of labeled data to guide the clustering process, thereby enhancing the discriminative power of the feature representations. Additionally, the multi-scale feature extraction mechanism is designed to adapt to the complexity of medical imaging data, resulting in more accurate clustering performance. To address the difficulty of hyperparameter selection in deep subspace clustering, this paper employs a Bayesian optimization algorithm for adaptive tuning of hyperparameters related to subspace clustering, prior knowledge constraints, and model loss weights. Extensive experiments on standard clustering datasets, including ORL, Coil20, and Coil100, validate the effectiveness of the MAS-DSC algorithm. The results show that with its multi-scale network structure and Bayesian hyperparameter optimization, MAS-DSC achieves excellent clustering results on these datasets. Furthermore, tests on a brain tumor dataset demonstrate the robustness of the algorithm and its ability to leverage prior knowledge for efficient feature extraction and enhanced clustering performance within a semi-supervised learning framework.
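The hyperparameter tuning described above can be expressed with an off-the-shelf Bayesian optimizer; the sketch below uses scikit-optimize, and the choice of three log-uniform loss-weight ranges is purely an assumption, as is the user-supplied objective `train_and_score`:

```python
from skopt import gp_minimize
from skopt.space import Real

def tune_loss_weights(train_and_score, n_calls=30):
    # train_and_score(params) trains a MAS-DSC-style model with the given loss weights
    # and returns a scalar to minimize (e.g. negative clustering accuracy on the
    # labeled subset). The three weights and their ranges are assumptions.
    search_space = [
        Real(1e-4, 1e-1, prior="log-uniform", name="self_expression_weight"),
        Real(1e-4, 1e-1, prior="log-uniform", name="prior_constraint_weight"),
        Real(1e-4, 1e-1, prior="log-uniform", name="reconstruction_weight"),
    ]
    result = gp_minimize(train_and_score, search_space, n_calls=n_calls, random_state=0)
    return result.x, result.fun
```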
Traditional Fuzzy C-Means (FCM) and Possibilistic C-Means (PCM) clustering algorithms are data-driven, and their objective function minimization process is based on the available numeric data. Recently, knowledge hints have been introduced to form knowledge-driven clustering algorithms, which reveal a data structure that considers not only the relationships between data but also the compatibility with knowledge hints. However, these algorithms cannot produce the optimal number of clusters by the clustering algorithm itself; they require the assistance of evaluation indices. Moreover, knowledge hints are usually used as part of the data structure (directly replacing some clustering centers), which severely limits the flexibility of the algorithm and can lead to knowledge misguidance. To solve this problem, this study designs a new knowledge-driven clustering algorithm called PCM clustering with High-density Points (HP-PCM), in which domain knowledge is represented in the form of so-called high-density points. First, a new data-density calculation function is proposed. The Density Knowledge Points Extraction (DKPE) method is established to filter out high-density points from the dataset to form knowledge hints. Then, these hints are incorporated into the PCM objective function so that the clustering algorithm is guided by high-density points to discover the natural data structure. Finally, the initial number of clusters is set to be greater than the true one based on the number of knowledge hints. Then, the HP-PCM algorithm automatically determines the final number of clusters during the clustering process by considering the cluster elimination mechanism. Through experimental studies, including some comparative analyses, the results highlight the effectiveness of the proposed algorithm, such as the increased success rate in clustering, the ability to determine the optimal cluster number, and the faster convergence speed.
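The paper's density function is not given in the abstract; as an illustration of how high-density points could be extracted as knowledge hints, the sketch below uses a Gaussian-kernel density with an assumed bandwidth, standing in for the DKPE step:

```python
import numpy as np
from scipy.spatial.distance import cdist

def density_scores(X, bandwidth=1.0):
    # Illustrative Gaussian-kernel density per point; the paper defines its own
    # density function, which the abstract does not reproduce.
    X = np.asarray(X, dtype=float)
    D = cdist(X, X)
    return np.exp(-(D / bandwidth) ** 2).sum(axis=1)

def extract_high_density_points(X, n_hints, bandwidth=1.0):
    # Simplified stand-in for DKPE: pick the n_hints densest points as knowledge hints.
    X = np.asarray(X, dtype=float)
    order = np.argsort(density_scores(X, bandwidth))[::-1]
    return X[order[:n_hints]]
```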
Path-based clustering algorithms typically generate clusters by optimizing a benchmark function. Most optimization methods in clustering algorithms offer solutions only close to the global optimal value. This study achieves the global optimum value for the criterion function in a shorter time using the minimax distance, a Maximum Spanning Tree (MST), and meta-heuristic algorithms, including the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The Fast Path-based Clustering (FPC) algorithm proposed in this paper can find cluster centers correctly in most datasets and quickly perform clustering operations. The FPC does this using the MST, the minimax distance, and a new hybrid meta-heuristic algorithm in a few rounds of algorithm iterations. This algorithm can achieve the global optimal value, and the main clustering process of the algorithm has a computational complexity of O(k²×n). However, due to the complexity of the minimax distance computation, the total computational complexity is O(n²). Experimental results of FPC on synthetic datasets with arbitrary shapes demonstrate that the algorithm is resistant to noise and outliers and can correctly identify clusters of varying sizes and numbers. In addition, the FPC requires the number of clusters as the only parameter to perform the clustering process. A comparative analysis of FPC and other clustering algorithms in this domain indicates that FPC exhibits superior speed, stability, and performance.
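The minimax (path-based) distance used by FPC is, for each pair of points, the smallest possible largest edge over all connecting paths; it can be computed along a spanning tree. The hedged sketch below uses SciPy's minimum spanning tree, on which the standard minimax distance is attained, and walks the tree from every source; it is an O(n²) illustration, not the paper's exact construction:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def minimax_distances(X):
    # Full minimax distance matrix: for each pair, the minimum over all connecting
    # paths of the maximum edge weight, computed along the spanning tree.
    D = squareform(pdist(np.asarray(X, dtype=float)))
    mst = minimum_spanning_tree(D).toarray()
    adj = np.maximum(mst, mst.T)                      # symmetric tree adjacency
    n, mm = len(D), np.zeros_like(D)
    for s in range(n):                                # DFS from each source along the tree
        stack, seen = [(s, 0.0)], {s}
        while stack:
            u, best = stack.pop()
            for v in np.nonzero(adj[u])[0]:
                if v not in seen:
                    seen.add(v)
                    mm[s, v] = max(best, adj[u, v])   # largest edge on the unique tree path
                    stack.append((v, mm[s, v]))
    return mm
```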
The reduction of energy consumption is an increasingly important topic for the railway system. Energy-efficient train control (EETC) is one solution, which refers to mathematically computing when to accelerate, which cruising speed to hold, how long one should coast over a suitable space, and when to brake. Most approaches in the literature and industry greatly simplify many nonlinear effects, mostly ignoring the losses due to energy conversion in traction components and auxiliaries. To fill this research gap, a series of increasingly detailed nonlinear losses is described and modelled. We categorize the increasing detail of this representation into four levels and study the impact of those levels of detail on the energy-optimal speed trajectory. To do this, a standard approach based on dynamic programming is used, given constraints on total travel time. This evaluation of multiple test cases highlights the influence of the dynamic losses and the power consumption of auxiliary components on railway trajectories, also compared to multiple benchmarks. The results show that the losses can make up 50% of the total energy consumption for an exemplary trip. Ignoring them would, however, result in consistent but limited errors in the optimal trajectory. Overall, more complex trajectories can result in less energy consumption when the complexity of nonlinear losses is included than when a simpler model is considered. Those effects are stronger when the trajectory includes many acceleration and braking phases.
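The dynamic-programming approach mentioned above can be illustrated with a deliberately small toy: one speed decision per distance step, with the travel-time constraint folded in through a Lagrangian-style penalty. Everything here (the state space, the time penalty, the user-supplied energy model) is an assumption and omits the nonlinear loss levels the paper actually studies:

```python
import numpy as np

def speed_profile_dp(n_steps, dx, speeds, energy_per_step, time_weight=1.0):
    # cost[k, j]: best cost to reach distance step k at speed index j.
    # energy_per_step(v_prev, v_next, dx) must return the traction energy of one step.
    n_v = len(speeds)
    cost = np.full((n_steps + 1, n_v), np.inf)
    choice = np.zeros((n_steps + 1, n_v), dtype=int)
    cost[0, 0] = 0.0                                   # start from the lowest speed state
    for k in range(n_steps):
        for i in range(n_v):
            if not np.isfinite(cost[k, i]):
                continue
            for j in range(n_v):
                v_avg = 0.5 * (speeds[i] + speeds[j])
                if v_avg <= 0.0:
                    continue
                step = energy_per_step(speeds[i], speeds[j], dx) + time_weight * dx / v_avg
                if cost[k, i] + step < cost[k + 1, j]:
                    cost[k + 1, j] = cost[k, i] + step
                    choice[k + 1, j] = i
    # Backtrack from the cheapest terminal state to recover the speed profile.
    idx = [int(np.argmin(cost[n_steps]))]
    for k in range(n_steps, 0, -1):
        idx.append(int(choice[k, idx[-1]]))
    return [speeds[i] for i in reversed(idx)]
```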
Contrastive learning is a significant research direction in the field of deep learning. However, existing data augmentation methods often lead to issues such as semantic drift in generated views, while the complexity of model pre-training limits further improvement in the performance of existing methods. To address these challenges, we propose the Efficient Clustering Network based on Matrix Factorization (ECN-MF). Specifically, we design a batched low-rank Singular Value Decomposition (SVD) algorithm for data augmentation to eliminate redundant information and uncover major patterns of variation and key information in the data. Additionally, we design a Mutual Information-Enhanced Clustering Module (MI-ECM) to accelerate the training process by leveraging a simple architecture to bring samples from the same cluster closer while pushing samples from other clusters apart. Extensive experiments on six datasets demonstrate that ECN-MF exhibits more effective performance compared to state-of-the-art algorithms.
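A truncated-SVD reconstruction of a data matrix, in the spirit of the low-rank augmentation described above, can be sketched in a few lines; the rank is an assumed hyperparameter, and the batching and GPU details of the paper are omitted:

```python
import numpy as np

def lowrank_view(X, rank=16):
    # Low-rank "augmented view" of a (samples x features) matrix: keep only the top
    # singular components, suppressing redundant detail while preserving the
    # dominant patterns of variation.
    U, S, Vt = np.linalg.svd(np.asarray(X, dtype=float), full_matrices=False)
    r = min(rank, len(S))
    return (U[:, :r] * S[:r]) @ Vt[:r]
```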
Data stream clustering is integral to contemporary big data applications. However, handling the ongoing influx of data streams efficiently and accurately remains a primary challenge in current research. This paper aims to elevate the efficiency and precision of data stream clustering. Leveraging the TEDA (Typicality and Eccentricity Data Analysis) algorithm as a foundation, we introduce improvements by integrating a nearest-neighbor search algorithm to enhance both the efficiency and accuracy of the algorithm. The original TEDA algorithm, grounded in the concept of "Typicality and Eccentricity Data Analytics", is an evolving and recursive method that requires no prior knowledge. While the algorithm autonomously creates and merges clusters as new data arrives, its efficiency is significantly hindered by the need to traverse all existing clusters whenever further data arrives. This work presents the NS-TEDA (Neighbor Search Based Typicality and Eccentricity Data Analysis) algorithm by incorporating a KD-Tree (K-Dimensional Tree) integrated with a Scapegoat Tree. This ensures that, upon arrival, new data points interact solely with clusters in very close proximity, which significantly enhances algorithm efficiency while preventing a single data point from joining too many clusters and mitigating, to some extent, the merging of clusters with high overlap. We apply the NS-TEDA algorithm to several well-known datasets, comparing its performance with other data stream clustering algorithms and the original TEDA algorithm. The results demonstrate that the proposed algorithm achieves higher accuracy, and its runtime exhibits almost linear dependence on the volume of data, making it more suitable for large-scale data stream analysis research.
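The proximity filter at the heart of NS-TEDA can be illustrated with an ordinary KD-tree query; the scapegoat-tree rebalancing used for the evolving stream is not reproduced, and the search radius is an assumed parameter:

```python
import numpy as np
from scipy.spatial import KDTree

def nearby_clusters(cluster_centers, new_point, radius):
    # Return the indices of existing clusters whose centers lie within `radius` of the
    # incoming sample, so the update step only touches those clusters instead of
    # traversing every cluster as in the original TEDA.
    tree = KDTree(np.asarray(cluster_centers, dtype=float))
    return tree.query_ball_point(np.asarray(new_point, dtype=float), r=radius)
```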
Hyperspectral imagery encompasses spectral and spatial dimensions, reflecting the material properties of objects. Its application proves crucial in search and rescue, concealed target identification, and crop growth analysis. Clustering is an important method of hyperspectral analysis. The vast data volume of hyperspectral imagery, coupled with redundant information, poses significant challenges in swiftly and accurately extracting features for subsequent analysis. Current hyperspectral feature clustering methods, which are mostly studied from the spatial or spectral perspective, do not have strong interpretability, resulting in poor comprehensibility of the algorithms. Therefore, this research introduces a feature clustering algorithm for hyperspectral imagery from an interpretability perspective. It commences with a simulated perception process, proposing an interpretable band selection algorithm to reduce data dimensions. Following this, a multi-dimensional clustering algorithm, rooted in fuzzy and kernel clustering, is developed to highlight intra-class similarities and inter-class differences. An optimized P system is then introduced to enhance computational efficiency. This system coordinates all cells within a mapping space to compute optimal cluster centers, facilitating parallel computation. This approach diminishes sensitivity to initial cluster centers and augments global search capabilities, thus preventing entrapment in local minima and enhancing clustering performance. Experiments were conducted on 300 datasets comprising both real and simulated data. The results show that the average accuracy (ACC) of the proposed algorithm is 0.86 and the combination measure (CM) is 0.81.
Implementing machine learning algorithms in the non-conducive environment of the vehicular network requires some adaptations due to the high computational complexity of these algorithms. K-clustering algorithms are simple, with fast performance and reasonable accuracy. However, their implementation depends on the initial selection of the number of clusters (K), the initial cluster centers, and the clustering metric. This paper investigated using Scott's histogram formula to estimate the K number and the Link Expiration Time (LET) as a clustering metric. Realistic traffic flows were considered for three maps, namely Highway, Traffic Light junction, and Roundabout junction, to study the effect of road layout on estimating the K number. A fast version of the PAM algorithm was used for clustering, with a modification to reduce time complexity. The Affinity Propagation algorithm sets the baseline for the estimated K number, and the Medoid Silhouette method is used to quantify the clustering. OMNET++, Veins, and SUMO were used to simulate the traffic, while the related algorithms were implemented in Python. Scott's formula estimate of the K number matched the baseline only when the road layout was simple. Moreover, the clustering algorithm required one iteration on average to converge when used with LET.
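Scott's histogram rule gives a bin width of h = 3.49·σ·n^(-1/3); reading the resulting number of bins as the estimated K can be sketched as follows, where the choice of which one-dimensional vehicle feature to bin on is an assumption for illustration:

```python
import math
import numpy as np

def scott_cluster_count(values):
    # Estimate K from a 1-D feature of the vehicles (e.g. position along the road)
    # using Scott's histogram formula: bin width h = 3.49 * sigma * n**(-1/3),
    # K = ceil(range / h).
    values = np.asarray(values, dtype=float)
    n = len(values)
    h = 3.49 * values.std() * n ** (-1.0 / 3.0)
    if h <= 0.0:
        return 1
    return max(1, math.ceil((values.max() - values.min()) / h))
```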