Purpose – The purpose of this study is to develop a quantitative evaluation method for contact wire cracks by analyzing how the characteristics of eddy current signals change for different cracks in high-speed railway contact wires, so as to provide a new approach to detecting contact wire damage on high-speed railways. Design/methodology/approach – Based on the principle of eddy current testing and the specification parameters of high-speed railway contact wires in China, a finite element model for eddy current testing of contact wires was established to explore the variation patterns of crack signal characteristics in numerical simulation. A crack detection system based on eddy current testing was built, and eddy current detection voltage data were obtained for cracks of different depths and widths. By analyzing the variation of the eddy current signals, characteristic parameters were extracted, and a quantitative evaluation model for crack width and depth was established based on a back propagation (BP) neural network. Findings – The eddy current signal variation observed in numerical simulation is essentially consistent with that observed experimentally, and the BP neural network crack evaluation model built on the selected characteristic parameters shows a certain degree of effectiveness and reliability. The BP neural network training results show that the classification accuracy for different crack widths and depths is 100% and 85.71%, respectively, so the model can effectively classify and quantitatively evaluate cracks in high-speed railway contact wires. Originality/value – This study establishes a new detection and identification method for high-speed railway contact wire cracks, providing a new technical means for contact wire damage detection. The study of eddy current signal characteristics and quantitative evaluation models for different contact wire cracks has academic value and practical significance, and offers guidance for contact wire detection technology on high-speed railways.
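The quantitative evaluation step above reduces to training a small back propagation network on signal-derived features. A minimal NumPy sketch follows, using synthetic stand-ins for the eddy current characteristic parameters and class labels (the real features, data, and network size are not given in the abstract):

```python
import numpy as np

# Tiny BP (back propagation) network: classify synthetic "crack" feature vectors
# (stand-ins for eddy current characteristic parameters) into two classes.
rng = np.random.default_rng(0)

# Synthetic features: class 0 clusters near (0.2, 0.3), class 1 near (0.8, 0.7).
X = np.vstack([rng.normal([0.2, 0.3], 0.05, (50, 2)),
               rng.normal([0.8, 0.7], 0.05, (50, 2))])
y = np.array([0] * 50 + [1] * 50).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer with 8 units, trained by gradient descent on cross-entropy.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)                  # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = out - y                           # backward pass (cross-entropy grad)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(0)

pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
accuracy = float((pred == y).mean())
print(f"training accuracy: {accuracy:.2f}")
```

In the study itself the network is trained on measured voltage-signal characteristics and reports per-class accuracies of 100% (width) and 85.71% (depth); the sketch only shows the training mechanics.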
Nowadays, high mobility scenarios have become increasingly common. The widespread adoption of High-speed Rail (HSR) in China exemplifies this trend, while more promising use cases, such as vehicle-to-everything, continue to emerge. However, the Internet access provided in high mobility environments still struggles to achieve seamless connectivity. The next generation of wireless cellular technology, 5G, further poses more requirements on the end-to-end evolution needed to fully utilize its ultra-high bandwidth, while existing network diagnostic tools focus only on the layers above or below IP. We therefore propose HiMoDiag, which enables flexible online analysis of network performance in a cross-layer manner, i.e., from the top (application layer) to the bottom (physical layer). We believe HiMoDiag can greatly simplify the process of pinpointing deficiencies in Internet access delivery on HSR, lead to more timely optimization, and ultimately help improve network performance.
Network updates have become increasingly prevalent since the broad adoption of software-defined networks (SDNs) in data centers. Modern TCP designs, however, including the cutting-edge TCP variants DCTCP, CUBIC, and BBR, are not resilient to network updates that provoke flow rerouting. In this paper, we first demonstrate that popular TCP implementations perform inadequately in the presence of frequent and inconsistent network updates, because such updates result in out-of-order packets and packet drops induced by transitory congestion, leading to serious performance deterioration. We investigate the causes and propose a network update-friendly TCP (NUFTCP), an extension of the DCTCP variant, as a solution. Simulations are used to assess the proposed NUFTCP. Our findings reveal that NUFTCP manages the out-of-order packets and packet drops triggered by network updates more effectively, and that it outperforms DCTCP considerably.
Artificial immune detection can be used to detect network intrusions adaptively, and proper matching methods can improve the accuracy of immune detection. This paper proposes an artificial immune detection model for network intrusion data based on a quantitative matching method. The proposed model defines the detection process using network data, with features expressed as decimal values, and simulates artificial immune mechanisms to define the immune elements. Then, to improve the accuracy of similarity calculation, a quantitative matching method is proposed. The model uses mathematical methods to train and evolve the immune elements, increasing the diversity of immune recognition and allowing unknown intrusions to be detected successfully. The model's objective is to identify known intrusions accurately and to extend identification to unknown intrusions by combining signature detection with immune detection, overcoming the disadvantages of traditional methods. The experimental results show that the proposed model detects intrusions effectively: it achieves a detection rate of more than 99.6% on average with a false alarm rate of 0.0264%, and it outperforms existing immune intrusion detection methods in comprehensive detection performance.
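The quantitative matching idea can be illustrated with a toy distance-based rule: detectors and incoming records are decimal feature vectors, and a record is flagged when it falls within a detector's matching radius. The feature values and threshold below are invented for illustration and are not the paper's actual matching function:

```python
import numpy as np

# Distance-based quantitative matching: detectors and records are decimal
# feature vectors; a record matches a detector when their normalized Euclidean
# distance is below a threshold. Values and threshold are invented.
def matches(detector, record, threshold=0.15):
    d = np.linalg.norm(np.asarray(detector) - np.asarray(record))
    return d / np.sqrt(len(detector)) < threshold

def detect(record, detectors):
    """Flag a record as an intrusion if any non-self detector matches it."""
    return any(matches(d, record) for d in detectors)

detectors = [np.array([0.9, 0.8, 0.7]), np.array([0.1, 0.9, 0.2])]  # non-self
normal = np.array([0.5, 0.5, 0.5])   # far from every detector
attack = np.array([0.88, 0.82, 0.69])  # close to the first detector

print(detect(normal, detectors), detect(attack, detectors))  # False True
```

A finer-grained similarity measure of this kind is what lets the immune elements generalize beyond exact signature matches to unknown intrusions.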
Large-scale wireless sensor networks (WSNs) play a critical role in monitoring dangerous scenarios and responding to medical emergencies. However, the inherent instability and error-prone nature of wireless links present significant challenges, necessitating efficient data collection and reliable transmission services. This paper addresses the limitations of existing data transmission and recovery protocols by proposing a systematic end-to-end design tailored to medical event-driven, cluster-based, large-scale WSNs. The primary goal is to enhance the reliability of data collection and transmission services in a comprehensive and practical way. Our approach refines the hop-count-based routing scheme to achieve fairness in forwarding reliability; it also emphasizes reliable data collection within clusters and establishes robust data transmission over multiple hops. These systematic improvements are designed to optimize the overall performance of the WSN in real-world scenarios. Simulation results validate the proposed protocol's exceptional performance compared to other prominent data transmission schemes. The evaluation spans varying sensor densities, wireless channel conditions, and packet transmission rates, showcasing the protocol's superiority in ensuring reliable and efficient data transfer. Our systematic end-to-end design successfully addresses the challenges posed by the instability of wireless links in large-scale WSNs. By prioritizing fairness, reliability, and efficiency, the proposed protocol enhances data collection and transmission services, offering a valuable contribution to the field of medical event-driven WSNs.
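The hop-count-based routing scheme mentioned above rests on each node knowing its hop distance to the sink; a sketch of that gradient construction (BFS flooding over a toy topology, not the paper's protocol) is:

```python
from collections import deque

# Hop-count gradient: each sensor learns its hop distance to the sink (BFS from
# the sink here, flooding in a real deployment) and forwards packets to any
# neighbor with a strictly smaller hop count. The topology is a toy example.
def hop_counts(adj, sink):
    hops = {sink: 0}
    q = deque([sink])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                q.append(v)
    return hops

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
hops = hop_counts(adj, sink=0)
# Candidate next hops for node 4: neighbors strictly closer to the sink.
next_hops = [v for v in adj[4] if hops[v] < hops[4]]
print(hops)       # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
print(next_hops)  # [3]
```

The paper's fairness refinement concerns how traffic is spread across such candidate forwarders; the sketch only shows the underlying gradient.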
The 6th generation mobile network (6G) is a multi-network interconnection, multi-scenario coexistence network in which multiple network domains break their original fixed boundaries to form connections and convergence. With the optimization objective of maximizing network utility while ensuring performance-centric weighted fairness among flows, this paper designs a reinforcement learning-based cloud-edge autonomous multi-domain data center network architecture that achieves single-domain autonomy and multi-domain collaboration. Because the utilities of different flows conflict, the bandwidth fairness allocation problem for the various types of flows is formulated by considering differently defined reward functions. Regarding the tradeoff between fairness and utility, this paper derives the corresponding reward functions for the cases where the flows change abruptly and where they change smoothly. In addition, to accommodate the Quality of Service (QoS) requirements of multiple types of flows, this paper proposes a multi-domain autonomous routing algorithm called LSTM+MADDPG. Introducing a Long Short-Term Memory (LSTM) layer into the actor and critic networks adds information about temporal continuity, further enhancing the adaptive ability to cope with changes in the dynamic network environment. LSTM+MADDPG is compared with the latest reinforcement learning algorithms in experiments on real network topologies and traffic traces, and the experimental results show that LSTM+MADDPG improves the delay convergence speed by 14.6% and delays the onset of packet loss by 18.2% compared with the other algorithms.
Mechanically cleaved two-dimensional materials are random in size and thickness. Recognizing atomically thin flakes by human experts is inefficient and unsuitable for scalable production. Deep learning algorithms have been adopted as an alternative; nevertheless, a major challenge is the lack of sufficient real training images. Here we report the generation of synthetic two-dimensional materials images using StyleGAN3 to complement the dataset. The DeepLabv3Plus network is trained with the synthetic images, which reduces overfitting and improves recognition accuracy to over 90%. A semi-supervisory technique for labeling images is introduced to reduce manual effort. The sharper edges recognized by this method facilitate material stacking with precise edge alignment, which benefits the exploration of novel properties of layered-material devices that depend crucially on the interlayer twist angle. This feasible and efficient method allows for the rapid and high-quality manufacturing of atomically thin materials and devices.
In wireless sensor networks (WSNs), the performance of related applications is highly dependent on the quality of the collected data. Unfortunately, missing data is almost inevitable during data acquisition and transmission. Existing methods often rely on prior information, such as low-rank characteristics or spatiotemporal correlation, when recovering missing WSN data. In realistic application scenarios, however, it is very difficult to obtain such priors from incomplete data sets. We therefore aim to recover missing WSN data effectively without depending on prior information. By designing a measurement matrix that captures the positions of the missing data together with a sparse representation matrix, a compressive sensing (CS) based missing data recovery model is established. We then design a comparison standard to select the best sparse representation basis and introduce average cross-correlation to examine the rationality of the established model. Furthermore, an improved fast matching pursuit algorithm is proposed to solve the model. Simulation results show that the proposed method can effectively recover missing WSN data.
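A minimal version of such a CS recovery model can be sketched as follows: the measurement matrix keeps only the observed sample positions, a DCT dictionary serves as the sparse representation basis, and plain orthogonal matching pursuit stands in for the paper's improved fast matching pursuit. The signal and missing pattern are synthetic:

```python
import numpy as np

# CS recovery sketch: the measurement matrix keeps observed sample positions,
# a DCT dictionary is the sparse basis, and basic orthogonal matching pursuit
# (OMP) stands in for the paper's improved fast matching pursuit.
n = 64
t = np.arange(n)
k = np.arange(n)

# Orthonormal DCT dictionary: column k is a cosine of frequency index k.
D = np.cos(np.pi * (2 * t[:, None] + 1) * k[None, :] / (2 * n))
D[:, 0] *= 1 / np.sqrt(n)
D[:, 1:] *= np.sqrt(2 / n)

# Synthetic "sensor" signal that is 3-sparse in the dictionary.
x_true = np.zeros(n)
x_true[[3, 7, 12]] = [1.0, -0.7, 0.4]
signal = D @ x_true

rng = np.random.default_rng(1)
observed = np.sort(rng.choice(n, size=48, replace=False))  # 25% samples missing
A = D[observed, :]       # effective sensing matrix (selection times dictionary)
y = signal[observed]

def omp(A, y, sparsity):
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

recovered = D @ omp(A, y, sparsity=3)
err = float(np.linalg.norm(recovered - signal) / np.linalg.norm(signal))
print(f"relative recovery error: {err:.2e}")
```

The paper's basis-selection standard and average cross-correlation check would sit on top of this skeleton; they are not reproduced here.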
Plant morphogenesis relies on precise gene expression programs at the proper time and position, orchestrated by transcription factors (TFs) in intricate regulatory networks in a cell-type specific manner. Here we introduce a comprehensive single-cell transcriptomic atlas of Arabidopsis seedlings. This atlas is the result of a meticulous integration of 63 previously published scRNA-seq datasets, addressing batch effects while conserving biological variance. The integration spans a broad spectrum of tissues, including both below- and above-ground parts. Using a rigorous approach to cell type annotation, we identified 47 distinct cell types or states, greatly expanding the current view of plant cell compositions. We systematically constructed cell-type specific gene regulatory networks and uncovered key regulators that act in a coordinated manner to control cell-type specific gene expression. Taken together, our study not only offers an extensive plant cell atlas that serves as a valuable resource, but also provides molecular insights into the gene-regulatory programs that vary across cell types.
Cloud Datacenter Network (CDN) providers usually have the option to scale their network structures to provide far more resource capacity, though such scaling may come with exponential costs that contradict their utility objectives. Besides the cost of the physical assets and network resources, scaling also imposes more load on the electricity power grids that feed the added nodes with the energy required to run and cool them, which brings extra costs too. Thus, CDN providers who utilize their resources better can afford to offer their services at lower price-units than those who simply choose to scale. Resource utilization is a challenging process, however: clients of CDNs tend to exaggerate their true resource requirements when they lease resources, and since service providers are committed to their clients through Service Level Agreements (SLAs), any amendment to the resource allocations must first be approved by the clients. In this work, we propose a Stackelberg leadership framework that formulates a negotiation game between cloud service providers and their client tenants, through which the providers seek to retrieve leased but unused resources from their clients. Cooperation cannot be expected from the clients, who may demand high price-units to return their extra resources to the provider's premises. Hence, to motivate cooperation in this non-cooperative game, we developed an incentive-compatible pricing model for the returned resources as an extension of the Vickrey auction. We also propose building a behavior belief function that shapes the negotiation and compensation for each client. Compared with other benchmark models, the assessment results show that our proposed models provide timely negotiation schemes, better resource utilization rates, higher utilities, and grid-friendly CDNs.
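The incentive-compatible pricing idea can be illustrated with the plain second-price (Vickrey) rule it extends: the tenant asking the lowest price-unit returns its resources but is paid the runner-up's ask, which makes truthful asking the dominant strategy. Tenant names and asks below are illustrative:

```python
# Second-price (Vickrey) payment sketch for the resource-return negotiation:
# the cheapest ask wins, but the winner is paid the runner-up's ask.
def vickrey_procurement(asks):
    """asks: dict tenant -> asking price-unit for returning resources.
    Returns (winner, payment): cheapest tenant, paid the second-lowest ask."""
    ranked = sorted(asks, key=asks.get)
    return ranked[0], asks[ranked[1]]

asks = {"tenant_a": 4.0, "tenant_b": 2.5, "tenant_c": 3.2}
winner, payment = vickrey_procurement(asks)
print(winner, payment)  # tenant_b 3.2
```

Because the payment never depends on the winner's own ask, understating or overstating the true cost of returning resources cannot improve a tenant's outcome; the paper's model extends this mechanism with per-client behavior beliefs.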
Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models are highly dependent on the quality of incoming data streams. One of the primary challenges with Bayesian networks is their vulnerability to adversarial data poisoning attacks, wherein malicious data is injected into the training dataset to negatively influence the models and impair their performance. In this paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. Our framework uses latent variables to quantify the amount of belief between every two nodes in each causal model over time. With regard to four different forms of data poisoning attack, we specifically aim to strengthen the security and dependability of Bayesian network structure learning techniques such as the PC algorithm; in doing so, we explore the complexity of this area and offer workable methods for identifying and mitigating these subtle threats. Additionally, our research investigates one particular use case, the "Visit to Asia" network, in which the practical consequences of using uncertainty as a way to spot cases of data poisoning are explored. Our results demonstrate the promising efficacy of latent variables in detecting and mitigating data poisoning attacks, and the proposed latent-based framework proves sensitive in detecting malicious data poisoning attacks on streaming data.
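The notion of tracking the belief between node pairs over a stream can be illustrated with a toy proxy: absolute correlation inside a window, with an abrupt drop between windows flagged as possible poisoning. This stands in for, and is much simpler than, the paper's latent-variable construction:

```python
import numpy as np

# Toy proxy for pairwise "belief" monitoring over a stream: absolute Pearson
# correlation inside a window; a large jump between consecutive windows flags
# possible poisoning. Threshold and data are illustrative.
rng = np.random.default_rng(2)

def window_belief(xs, ys):
    return float(abs(np.corrcoef(xs, ys)[0, 1]))

# Clean window: Y depends on X. Poisoned window: the dependence is destroyed.
x_clean = rng.normal(size=200)
y_clean = x_clean + 0.3 * rng.normal(size=200)
x_pois = rng.normal(size=200)
y_pois = rng.normal(size=200)

b_clean = window_belief(x_clean, y_clean)
b_pois = window_belief(x_pois, y_pois)
alert = abs(b_clean - b_pois) > 0.5        # hypothetical drift threshold
print(f"belief clean={b_clean:.2f} poisoned={b_pois:.2f} alert={alert}")
```

In the paper the per-edge belief comes from latent variables maintained alongside structure learning (e.g., the PC algorithm), not from raw correlation, but the monitoring-and-thresholding pattern is the same.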
Wireless sensor networks (WSNs) are widely utilized in large-scale distributed unmanned detection scenarios due to their low cost and flexible installation. However, WSN data collection encounters challenges in scenarios lacking communication infrastructure. Unmanned aerial vehicles (UAVs), with their high mobility, offer a novel solution for WSN data collection. In this paper, we present an efficient UAV-assisted data collection algorithm aimed at minimizing the overall power consumption of the WSN. First, a two-layer UAV-assisted data collection model is introduced, comprising a ground layer and an aerial layer. In the ground layer, the cluster members (CMs) sense the environmental data and transmit it to the cluster heads (CHs), which forward the collected data to the UAVs. The aerial layer consists of multiple UAVs that collect, store, and forward data from the CHs to the data center for analysis. Second, an improved clustering algorithm based on K-Means++ is proposed to optimize the number and locations of the CHs. Moreover, an Actor-Critic based algorithm is introduced to optimize the UAV deployment and the association with the CHs. Finally, simulation results verify the effectiveness of the proposed algorithms.
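The K-Means++ step above refers to the standard D²-weighted seeding rule; a sketch with synthetic sensor coordinates (cluster counts and positions are illustrative, and the paper's improvements on top of this are not reproduced):

```python
import numpy as np

# K-Means++ seeding: the first centre is uniform; each further centre is drawn
# with probability proportional to the squared distance to its nearest chosen
# centre. Node coordinates are synthetic stand-ins for sensor positions.
def kmeanspp_init(points, k, rng):
    centers = [points[rng.integers(len(points))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((points - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(points[rng.choice(len(points), p=d2 / d2.sum())])
    return np.array(centers)

rng = np.random.default_rng(3)
points = np.vstack([rng.normal(c, 0.3, (30, 2)) for c in ([0, 0], [5, 0], [0, 5])])
centers = kmeanspp_init(points, k=3, rng=rng)
print(centers.round(1))   # typically one centre per well-separated node cluster
```

The D² weighting is what spreads the initial cluster heads apart, which in this setting shortens intra-cluster transmission distances and hence CM transmit power.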
Early and timely diagnosis of stroke is critical for effective treatment, and the electroencephalogram (EEG) offers a low-cost, non-invasive solution. However, the shortage of high-quality patient EEG data often hampers the accuracy of deep learning-based diagnostic classification methods. To address this issue, we designed a deep data amplification model named Progressive Conditional Generative Adversarial Network with Efficient Approximating Self-Attention (PCGAN-EASA), which incrementally improves the quality of the generated EEG features, yielding full-scale, fine-grained EEG features from low-scale, coarse ones. Specifically, to overcome the limitation of traditional generative models that fail to generate features tailored to individual patient characteristics, we developed an encoder with an efficient approximating self-attention mechanism. This encoder not only automatically extracts relevant features across different patients but also reduces computational resource consumption. Furthermore, the adversarial and reconstruction loss functions were redesigned to better align with the training characteristics of the network and the spatial correlations among electrodes. Extensive experimental results demonstrate that PCGAN-EASA provides the highest generation quality and the lowest computational resource usage compared with several existing approaches, and that it significantly improves the accuracy of subsequent stroke classification tasks.
Automatic modulation recognition (AMR) of radiation source signals is a research focus in the field of cognitive radio. However, AMR of radiation source signals at low SNRs still faces a great challenge. This paper therefore proposes an AMR method for radiation source signals based on a two-dimensional data matrix and an improved residual neural network. First, the time series of the radiation source signal is reconstructed into a two-dimensional data matrix, which greatly simplifies the signal preprocessing process. Second, a residual neural network based on depthwise convolution and large-size convolutional kernels (DLRNet) is proposed to improve the feature extraction capability of the AMR model. Finally, the model performs feature extraction and classification on the two-dimensional data matrix to obtain the recognition vector that represents the signal modulation type. Theoretical analysis and simulation results show that the proposed method significantly improves AMR accuracy: the recognition accuracy remains above 90% even at −14 dB SNR.
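The first step, reconstructing the sampled time series into a two-dimensional data matrix, is essentially a reshape; a sketch with a synthetic signal and an assumed 32×32 matrix size (the paper's actual dimensions are not stated in the abstract):

```python
import numpy as np

# Folding a sampled time series into a square two-dimensional data matrix that
# a CNN-style model such as DLRNet can consume; the 32x32 side is illustrative.
def to_matrix(samples, side):
    """Truncate a 1-D signal to side*side samples and reshape to (side, side)."""
    return np.asarray(samples[: side * side]).reshape(side, side)

signal = np.sin(0.1 * np.arange(1100))   # synthetic radiation-source signal
m = to_matrix(signal, side=32)
print(m.shape)  # (32, 32)
```

The appeal of this preprocessing is that it avoids hand-crafted transforms (spectrograms, constellation images) and leaves feature extraction entirely to the network.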
Ocean temperature is an important physical variable in marine ecosystems, and its prediction is an important research objective in ocean-related fields. Data-driven methods are commonly used for ocean temperature prediction, but research has mostly been limited to the sea surface, with few studies predicting internal ocean temperature. Existing graph neural network-based methods usually use predefined graphs or learned static graphs, which cannot capture the dynamic associations among the data. In this study, we propose a novel dynamic spatiotemporal graph neural network (DSTGN) to predict three-dimensional ocean temperature (3D-OT). It combines static graph learning and dynamic graph learning to automatically mine two kinds of unknown dependencies between sequences from the original 3D-OT data, without prior knowledge. Temporal and spatial dependencies in the time series are then captured using temporal and graph convolutions. We integrated dynamic graph learning, static graph learning, graph convolution, and temporal convolution into an end-to-end framework for 3D-OT prediction on time-series grid data. We conducted prediction experiments using high-resolution 3D-OT from the Copernicus global ocean physical reanalysis, with data covering the vertical variation of temperature from the sea surface down to 1000 m. Compared with five mainstream models commonly used for ocean temperature prediction, our method achieved the best results at all prediction scales.
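The static-graph-learning component referenced above is commonly realized by deriving an adjacency matrix from node embeddings; a sketch of that idea follows (random embeddings stand in for learned ones, and this is not necessarily DSTGN's exact formulation):

```python
import numpy as np

# Deriving an adjacency matrix from node embeddings: ReLU keeps link scores
# non-negative and a row-wise softmax normalizes each node's outgoing weights.
rng = np.random.default_rng(5)
n_nodes, dim = 6, 4
E1 = rng.normal(size=(n_nodes, dim))   # source embeddings (learned in practice)
E2 = rng.normal(size=(n_nodes, dim))   # target embeddings (learned in practice)

scores = np.maximum(E1 @ E2.T, 0)                               # ReLU(E1 E2^T)
A = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax rows
print(A.shape)  # (6, 6); each row sums to 1
```

Because the embeddings are trained jointly with the prediction loss, the graph structure is mined from the 3D-OT data itself rather than predefined; the dynamic variant additionally conditions the adjacency on the current inputs.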
Traditional air traffic control information sharing offers weak protection for personal privacy data and performs poorly, which easily leads to data being usurped. Starting from the application of the ATC (air traffic control) network, this paper focuses on a zero trust access strategy and a tamper-proof method for information-sharing network data. By improving the ATC zero trust physical layer authentication and the distributed feature differentiation calculation for network data, this paper reconstructs the personal privacy scope authentication structure and designs a tamper-proof method for ATC information sharing on the Internet. Moving from a single management authority to the unified management of data units, a systematic algorithmic improvement of the shared network data tamper prevention method is realized, and RDTP (Reliable Data Transfer Protocol) is selected for the network data of information-sharing resources to keep air traffic control data tamper-proof during transmission. The results show that this method can reasonably prevent the tampering of information shared on the Internet and maintain the security of air traffic control information sharing, while Central Processing Unit (CPU) utilization is only 4.64%, effectively increasing the performance of the comprehensive security protection system for air traffic control data.
The use of the Internet of Things (IoT) is expanding at an unprecedented scale in many critical applications due to its ability to interconnect and utilize a wide range of devices. In critical infrastructure domains like oil and gas supply, intelligent transportation, power grids, and autonomous agriculture, it is essential to guarantee the confidentiality, integrity, and authenticity of the data collected and exchanged. However, the limited resources and heterogeneity of IoT devices make it inefficient, or sometimes infeasible, to achieve secure data transmission using traditional cryptographic techniques, so designing a lightweight secure data transmission scheme has become essential. In this article, we propose a lightweight secure data transmission (LSDT) scheme for IoT environments. LSDT consists of three phases and utilizes an effective combination of symmetric keys and the Elliptic Curve Menezes-Qu-Vanstone asymmetric key agreement protocol. We design a simulation environment and experiments to evaluate the performance of the LSDT scheme in terms of communication and computation costs. Security and performance analysis indicates that the LSDT scheme is secure, suitable for IoT applications, and performs better than other related security schemes.
The inter-city linkage heat data provided by Baidu Migration is employed as a characterization of inter-city linkages in order to study, via social network analysis, the network linkage characteristics and hierarchical structure of the urban agglomeration in the Greater Bay Area. This is the first application of location-based-services big data to the study of urban agglomeration network structure, representing a novel research perspective on the topic. The study reveals that the density of network linkages in the Greater Bay Area urban agglomeration has reached 100%, indicating a mature network-like spatial structure. This structure has given rise to three distinct communities: Shenzhen-Dongguan-Huizhou, Guangzhou-Foshan-Zhaoqing, and Zhuhai-Zhongshan-Jiangmen. Additionally, cities within the urban agglomeration play different roles, suggesting that varying development strategies may be necessary to achieve staggered development. The study demonstrates that large location-based-services (LBS) datasets can offer novel insights and methodologies for examining urban agglomeration network structures, contingent on appropriate mining and processing of the data.
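The 100% network density figure corresponds to the standard social-network-analysis definition: the share of ordered city pairs with a non-zero linkage. A sketch on a small illustrative matrix (not Baidu Migration data):

```python
import numpy as np

# Directed network density: share of ordered city pairs with any linkage.
def network_density(W):
    n = W.shape[0]
    ties = np.count_nonzero(W) - np.count_nonzero(np.diag(W))
    return ties / (n * (n - 1))

# Illustrative 3-city weighted linkage matrix (rows: origin, cols: destination).
W = np.array([[0, 5, 2],
              [4, 0, 1],
              [3, 2, 0]], dtype=float)
print(network_density(W))  # 1.0 -> every ordered city pair is linked
```

A density of 1.0 only says that every pair is connected at all; the community and role findings in the study come from the linkage weights, not from density alone.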
Background: Diabetic retinopathy (DR) is currently the leading cause of blindness in elderly individuals with diabetes. Traditional Chinese medicine (TCM) prescriptions have shown remarkable effectiveness in treating DR. This study aimed to screen a novel TCM prescription against DR from patents and to elucidate its medication rule and molecular mechanism using data mining, network pharmacology, molecular docking, and molecular dynamics (MD) simulation. Method: TCM prescriptions for treating DR were collected from patents, and a novel TCM prescription was identified using data mining. Subsequently, the mechanism of the novel prescription against DR was explored by constructing a network of core TCMs, core active ingredients, core targets, and core pathways. Finally, molecular docking and MD simulation were employed to validate the findings from network pharmacology. Result: The TCMs of the collected prescriptions primarily possessed bitter and cold properties with heat-clearing and supplementing effects, attributed to the liver, lung, and kidney channels. Notably, a novel TCM prescription for treating DR was identified, composed of Lycii Fructus, Chrysanthemi Flos, Astragali Radix, and Angelicae Sinensis Radix. Twenty core active ingredients and ten core targets of the prescription were screened. Moreover, the prescription played a crucial role in treating DR by inhibiting inflammatory response, oxidative stress, retinal pigment epithelium cell apoptosis, and retinal neovascularization through various pathways, such as the AGE-RAGE signaling pathway in diabetic complications and the MAPK signaling pathway. Finally, molecular docking and MD simulation demonstrated that almost all core active ingredients exhibited satisfactory binding energies to the core targets. Conclusions: This study identified a novel TCM prescription and unveiled its multi-component, multi-target, and multi-pathway characteristics for treating DR. These findings provide a scientific basis and novel insights for the development of drugs for DR prevention and treatment.
This study explores the application of Bayesian analysis based on neural networks and deep learning to data visualization. The background is that, with increasing data volume and complexity, traditional data analysis methods can no longer meet the need. The research methods include building neural network and deep learning models, optimizing and improving them through Bayesian analysis, and applying them to the visualization of large-scale data sets. The results show that neural networks combined with Bayesian analysis and deep learning can effectively improve the accuracy and efficiency of data visualization and enhance the intuitiveness and depth of data interpretation. The significance of the research is that it provides a new solution for data visualization in the big data environment and helps further promote the development and application of data science.
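One concrete way Bayesian analysis can feed a visualization is by supplying posterior uncertainty to plot. A minimal conjugate Bayesian linear regression sketch follows (the data, priors, and noise level are synthetic, and the study's actual models are neural networks rather than this toy):

```python
import numpy as np

# Conjugate Bayesian linear regression: the posterior over [intercept, slope]
# provides both a fit and an uncertainty estimate that a visualization could
# render as shaded credible bands around the regression line.
rng = np.random.default_rng(4)
x = np.linspace(0, 1, 30)
y = 2.0 * x + 0.5 + 0.1 * rng.normal(size=30)   # noisy line: slope 2, bias 0.5

Phi = np.column_stack([np.ones_like(x), x])     # design matrix: bias + slope
alpha, beta = 1.0, 100.0                        # prior precision, noise precision
S = np.linalg.inv(alpha * np.eye(2) + beta * Phi.T @ Phi)  # posterior covariance
mean = beta * S @ Phi.T @ y                     # posterior mean
std = np.sqrt(np.diag(S))                       # posterior standard deviations
print("posterior mean:", mean.round(2), "posterior std:", std.round(3))
```

Plotting the posterior mean line together with bands derived from `std` is the kind of uncertainty-aware rendering the study argues enhances the depth of data interpretation.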
Abstract: Purpose–The purpose of this study is to investigate a quantitative evaluation method for contact wire cracks by analyzing how eddy current signal characteristics vary across different cracks in high-speed railway contact wires, so as to provide a new approach to detecting contact wire damage on high-speed railways. Design/methodology/approach–Based on the principle of eddy current testing and the specification parameters of high-speed railway contact wires in China, a finite element model for eddy current testing of contact wires was established to explore the variation patterns of crack signal characteristics in numerical simulation. A crack detection system based on eddy current testing was built, and eddy current detection voltage data were obtained for cracks of different depths and widths. By analyzing the variation patterns of the eddy current signals, characteristic parameters were extracted and a quantitative evaluation model for crack width and depth was established based on a back-propagation (BP) neural network. Findings–The eddy current signal variation patterns obtained from numerical simulation and experimental detection are largely consistent, and the BP neural network quantitative evaluation model built on the characteristic parameters selected from these patterns shows a degree of effectiveness and reliability. BP neural network training results show classification accuracies of 100% and 85.71% for different crack widths and depths, respectively, so the model can effectively realize quantitative evaluation and classification of cracks in high-speed railway contact wires. Originality/value–This study establishes a new detection and identification method for high-speed railway contact wire cracks, providing a new technical means for contact wire damage detection. The study of eddy current characteristic patterns and quantitative evaluation models for different contact wire cracks has academic value and practical significance, and offers guidance for contact wire detection technology on high-speed railways.
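The abstract does not detail the BP network's architecture or its input features; as a rough illustration only, the sketch below trains a minimal single-hidden-layer back-propagation classifier on invented two-feature crack samples (hypothetical normalized peak voltage and phase shift), separating two made-up depth classes. All data, dimensions and hyperparameters are assumptions, not the paper's model.

```python
import math, random

random.seed(0)

def train_bp(samples, labels, hidden=4, lr=0.5, epochs=2000):
    """Minimal single-hidden-layer back-propagation (BP) classifier.
    samples: list of feature vectors; labels: 0/1 class indices."""
    n_in = len(samples[0])
    w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [random.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # forward pass
            h = [sig(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j])
                 for j in range(hidden)]
            o = sig(sum(w2[j] * h[j] for j in range(hidden)) + b2)
            # back-propagate the squared-error gradient
            d_o = (o - y) * o * (1 - o)
            for j in range(hidden):
                d_h = d_o * w2[j] * h[j] * (1 - h[j])
                w2[j] -= lr * d_o * h[j]
                for i in range(n_in):
                    w1[j][i] -= lr * d_h * x[i]
                b1[j] -= lr * d_h
            b2 -= lr * d_o
    def predict(x):
        h = [sig(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j])
             for j in range(hidden)]
        return 1 if sig(sum(w2[j] * h[j] for j in range(hidden)) + b2) > 0.5 else 0
    return predict

# Hypothetical (peak voltage, phase shift) features: shallow vs. deep cracks
X = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.15], [0.8, 0.9], [0.9, 0.8], [0.85, 0.85]]
y = [0, 0, 0, 1, 1, 1]
predict = train_bp(X, y)
print([predict(x) for x in X])  # classifies the toy training set
```

The same training loop extends to multi-class width/depth labels by using one output neuron per class.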
Funding: Supported by the National Key Research and Development Plan, China (Grant No. 2020YFB1710900) and the National Natural Science Foundation of China (Grant Nos. 62022005 and 62172008).
Abstract: Nowadays, high-mobility scenarios have become increasingly common. The widespread adoption of High-speed Rail (HSR) in China exemplifies this trend, while more promising use cases, such as vehicle-to-everything, continue to emerge. However, the Internet access provided in high-mobility environments still struggles to achieve seamless connectivity. The next generation of wireless cellular technology, 5G, further poses more requirements on end-to-end evolution to fully utilize its ultra-high bandwidth, while existing network diagnostic tools focus only on layers above or below the IP layer. We therefore propose HiMoDiag, which enables flexible online analysis of network performance in a cross-layer manner, i.e., from the top (application layer) to the bottom (physical layer). We believe HiMoDiag can greatly simplify the process of pinpointing deficiencies in Internet access delivery on HSR, lead to more timely optimization and ultimately help improve network performance.
Funding: Supported by King Khalid University through the Large Group Project (No. RGP.2/312/44).
Abstract: Network updates have become increasingly prevalent since the broad adoption of software-defined networks (SDNs) in data centers. Modern TCP designs, including the cutting-edge TCP variants DCTCP, CUBIC, and BBR, however, are not resilient to network updates that provoke flow rerouting. In this paper, we first demonstrate that popular TCP implementations perform inadequately in the presence of frequent and inconsistent network updates, because such updates result in out-of-order packets and packet drops induced by transitory congestion, leading to serious performance deterioration. We look into the causes and propose a network update-friendly TCP (NUFTCP), an extension of the DCTCP variant, as a solution. Simulations are used to assess the proposed NUFTCP. Our findings reveal that NUFTCP can more effectively manage the out-of-order packets and packet drops triggered by network updates, and that it outperforms DCTCP considerably.
Funding: This research was funded by the Scientific Research Project of Leshan Normal University (No. 2022SSDX002) and the Scientific Plan Project of Leshan (No. 22NZD012).
Abstract: Artificial immune detection can be used to detect network intrusions in an adaptive manner, and proper matching methods can improve the accuracy of immune detection. This paper proposes an artificial immune detection model for network intrusion data based on a quantitative matching method. The proposed model defines the detection process using network data, with decimal values expressing features, and simulates artificial immune mechanisms to define the immune elements. To improve the accuracy of similarity calculation, a quantitative matching method is proposed. The model uses mathematical methods to train and evolve immune elements, increasing the diversity of immune recognition and allowing for the successful detection of unknown intrusions. The model's objective is to accurately identify known intrusions and expand the identification of unknown intrusions through combined signature detection and immune detection, overcoming the disadvantages of traditional methods. The experimental results show that the proposed model detects intrusions effectively: it achieves an average detection rate of more than 99.6% with a false alarm rate of 0.0264%, outperforming existing immune intrusion detection methods in comprehensive detection performance.
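The abstract's quantitative matching method is not specified; a common way to realize quantitative matching over decimal-valued features is an affinity score that decays with distance, as in the hedged sketch below. The detectors, features and threshold are invented for illustration only.

```python
def affinity(detector, sample):
    """Quantitative match: affinity decays with the Euclidean distance
    between decimal-valued feature vectors (1.0 = identical)."""
    d = sum((a - b) ** 2 for a, b in zip(detector, sample)) ** 0.5
    return 1.0 / (1.0 + d)

def detect(detectors, sample, threshold=0.8):
    """Flag the sample as an intrusion if any immune detector's
    affinity to it exceeds the matching threshold."""
    return any(affinity(det, sample) >= threshold for det in detectors)

# Hypothetical detectors evolved from known attack signatures
detectors = [[0.9, 0.1, 0.8], [0.2, 0.9, 0.7]]
print(detect(detectors, [0.88, 0.12, 0.79]))  # near a detector -> True
print(detect(detectors, [0.5, 0.5, 0.1]))     # far from all detectors -> False
```

In a full model, detectors that never match would be discarded and high-affinity detectors cloned and mutated, giving the adaptive behavior the abstract describes.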
Abstract: Large-scale wireless sensor networks (WSNs) play a critical role in monitoring dangerous scenarios and responding to medical emergencies. However, the inherent instability and error-prone nature of wireless links present significant challenges, necessitating efficient data collection and reliable transmission services. This paper addresses the limitations of existing data transmission and recovery protocols by proposing a systematic end-to-end design tailored for medical event-driven cluster-based large-scale WSNs. The primary goal is to enhance the reliability of data collection and transmission services, ensuring a comprehensive and practical approach. Our approach focuses on refining the hop-count-based routing scheme to achieve fairness in forwarding reliability. Additionally, it emphasizes reliable data collection within clusters and establishes robust data transmission over multiple hops. These systematic improvements are designed to optimize the overall performance of the WSN in real-world scenarios. Simulation results validate the protocol's exceptional performance compared with other prominent data transmission schemes. The evaluation spans varying sensor densities, wireless channel conditions, and packet transmission rates, showcasing the protocol's superiority in ensuring reliable and efficient data transfer. Our systematic end-to-end design successfully addresses the challenges posed by unstable wireless links in large-scale WSNs. By prioritizing fairness, reliability, and efficiency, the proposed protocol demonstrates its efficacy in enhancing data collection and transmission services, thereby offering a valuable contribution to the field of medical event-driven WSNs.
Abstract: The sixth-generation (6G) mobile network is a multi-network interconnection, multi-scenario coexistence network in which multiple network domains break their original fixed boundaries to form connections and convergence. With the optimization objective of maximizing network utility while ensuring performance-centric weighted fairness among flows, this paper designs a reinforcement learning-based cloud-edge autonomous multi-domain data center network architecture that achieves single-domain autonomy and multi-domain collaboration. Because the utilities of different flows conflict, the bandwidth fairness allocation problem for various types of flows is formulated by considering differently defined reward functions. Regarding the tradeoff between fairness and utility, this paper constructs the corresponding reward functions for the cases where the flows undergo abrupt changes and smooth changes. In addition, to accommodate the Quality of Service (QoS) requirements of multiple types of flows, this paper proposes a multi-domain autonomous routing algorithm called LSTM+MADDPG. By introducing a Long Short-Term Memory (LSTM) layer into the actor and critic networks, more information about temporal continuity is captured, further enhancing the ability to adapt to changes in the dynamic network environment. The LSTM+MADDPG algorithm is compared with recent reinforcement learning algorithms through experiments on real network topologies and traffic traces; the experimental results show that LSTM+MADDPG improves delay convergence speed by 14.6% and delays the onset of packet loss by 18.2% compared with the other algorithms.
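The abstract's weighted-fairness objective is not spelled out; a standard formulation it resembles is weighted max-min fair bandwidth allocation, sketched below as a water-filling loop. The link capacity, demands and weights are invented for illustration, not taken from the paper.

```python
def weighted_max_min(capacity, demands, weights):
    """Weighted max-min fair allocation: repeatedly split the spare
    capacity in proportion to flow weights, capping any flow that
    reaches its demand and redistributing the remainder."""
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))
    while active and capacity > 1e-9:
        total_w = sum(weights[i] for i in active)
        share = {i: capacity * weights[i] / total_w for i in active}
        capped = {i for i in active if alloc[i] + share[i] >= demands[i]}
        if not capped:
            for i in active:          # nobody is saturated: final split
                alloc[i] += share[i]
            capacity = 0.0
        else:
            for i in capped:          # saturate capped flows, free the rest
                capacity -= demands[i] - alloc[i]
                alloc[i] = demands[i]
            active -= capped
    return alloc

# Hypothetical 10 Mb/s link, three flows with demands and priority weights
print(weighted_max_min(10.0, [8.0, 4.0, 2.0], [2.0, 1.0, 1.0]))
```

The low-demand flow is satisfied exactly and the leftover capacity is split 2:1 between the other two, which is the fairness/utility tradeoff the abstract's reward functions would encode.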
Funding: Project supported by the National Key Research and Development Program of China (Grant No. 2022YFB2803900), the National Natural Science Foundation of China (Grant Nos. 61974075 and 61704121), the Natural Science Foundation of Tianjin Municipality (Grant Nos. 22JCZDJC00460 and 19JCQNJC00700), the Tianjin Municipal Education Commission (Grant No. 2019KJ028) and the Fundamental Research Funds for the Central Universities (Grant No. 22JCZDJC00460).
Abstract: Mechanically cleaved two-dimensional materials are random in size and thickness. Recognizing atomically thin flakes by human experts is inefficient and unsuitable for scalable production. Deep learning algorithms have been adopted as an alternative; nevertheless, a major challenge is the lack of sufficient actual training images. Here we report the generation of synthetic two-dimensional materials images using StyleGAN3 to complement the dataset. A DeepLabv3Plus network trained with the synthetic images reduces overfitting and improves recognition accuracy to over 90%. A semi-supervisory technique for labeling images is introduced to reduce manual effort. The sharper edges recognized by this method facilitate material stacking with precise edge alignment, which benefits the exploration of novel properties of layered-material devices that crucially depend on the interlayer twist angle. This feasible and efficient method allows for the rapid and high-quality manufacturing of atomically thin materials and devices.
Funding: Supported by the National Natural Science Foundation of China (No. 61871400) and the Natural Science Foundation of Jiangsu Province of China (No. BK20171401).
Abstract: In wireless sensor networks (WSNs), the performance of related applications is highly dependent on the quality of the collected data. Unfortunately, missing data is almost inevitable during data acquisition and transmission. Existing methods often rely on prior information, such as low-rank characteristics or spatiotemporal correlation, when recovering missing WSN data. However, in realistic application scenarios, it is very difficult to obtain such prior information from incomplete data sets. We therefore aim to recover missing WSN data effectively without relying on prior information. By designing a measurement matrix that captures the positions of missing data together with a sparse representation matrix, a compressive sensing (CS)-based missing data recovery model is established. We then design a comparison standard to select the best sparse representation basis and introduce average cross-correlation to examine the rationality of the established model. Furthermore, an improved fast matching pursuit algorithm is proposed to solve the model. Simulation results show that the proposed method can effectively recover missing WSN data.
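The paper's improved fast matching pursuit is not described in the abstract; for orientation, the sketch below implements plain matching pursuit, the greedy baseline such methods refine: repeatedly pick the dictionary atom most correlated with the residual and subtract its projection. The dictionary and signal are toy inventions.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, dictionary, iters=2):
    """Plain matching pursuit: greedily select the atom with the largest
    correlation to the residual, record its coefficient, and subtract
    its projection from the residual."""
    residual = list(signal)
    coeffs = {}
    for _ in range(iters):
        best = max(range(len(dictionary)),
                   key=lambda k: abs(dot(residual, dictionary[k])))
        atom = dictionary[best]
        c = dot(residual, atom) / dot(atom, atom)
        coeffs[best] = coeffs.get(best, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, atom)]
    return coeffs, residual

# Hypothetical orthogonal dictionary; signal = 3*atom0 + 2*atom2
dictionary = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
signal = [3.0, 0.0, 2.0, 0.0]
coeffs, residual = matching_pursuit(signal, dictionary)
print(coeffs)  # {0: 3.0, 2: 2.0}
```

With an overcomplete dictionary and a measurement matrix marking the missing positions, the recovered sparse coefficients reconstruct the full data vector.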
Funding: Supported by the National Natural Science Foundation of China (No. 32070656), the Nanjing University Deng Feng Scholars Program, the Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions, a China Postdoctoral Science Foundation funded project (No. 2022M711563) and the Jiangsu Funding Program for Excellent Postdoctoral Talent (No. 2022ZB50).
Abstract: Plant morphogenesis relies on precise gene expression programs at the proper time and position, which are orchestrated by transcription factors (TFs) in intricate regulatory networks in a cell-type-specific manner. Here we introduce a comprehensive single-cell transcriptomic atlas of Arabidopsis seedlings. This atlas is the result of meticulous integration of 63 previously published scRNA-seq datasets, addressing batch effects while conserving biological variance. The integration spans a broad spectrum of tissues, including both below- and above-ground parts. Using a rigorous approach for cell type annotation, we identified 47 distinct cell types or states, largely expanding the current view of plant cell compositions. We systematically constructed cell-type-specific gene regulatory networks and uncovered key regulators that act in a coordinated manner to control cell-type-specific gene expression. Taken together, our study not only offers an extensive plant cell atlas that serves as a valuable resource, but also provides molecular insights into gene-regulatory programs that vary across cell types.
Funding: This work was partially funded by the Deanship of Scientific Research at the Hashemite University and by the Deanship of Scientific Research at Northern Border University, Arar, KSA, through project number "NBU-FFR-2024-1580-08".
Abstract: Cloud Datacenter Network (CDN) providers usually have the option to scale their network structures to allow for far more resource capacity, though such scaling may come with exponential costs that contradict their utility objectives. Besides the cost of the physical assets and network resources, scaling also imposes more load on the electricity power grids to feed the added nodes with the energy required to run and cool them, which brings extra costs too. Thus, CDN providers who utilize their resources better can afford their services at lower price units than those who simply choose scaling solutions. Resource utilization is a challenging process; indeed, clients of CDNs usually tend to exaggerate their true resource requirements when they lease resources. Service providers are committed to their clients through Service Level Agreements (SLAs), so any amendment to resource allocations needs to be approved by the clients first. In this work, we propose deploying a Stackelberg leadership framework to formulate a negotiation game between cloud service providers and their client tenants, through which the providers seek to retrieve leased but unused resources from their clients. Cooperation is not expected from the clients, who may ask high price units to return their extra resources to the provider's premises. Hence, to motivate cooperation in such a non-cooperative game, we developed an incentive-compatible pricing model for the returned resources as an extension to Vickrey auctions. Moreover, we also propose building a behavior belief function that shapes the negotiation and compensation for each client. Compared with other benchmark models, the assessment results show that our proposed models provide timely negotiation schemes, allowing for better resource utilization rates, higher utilities, and grid-friendly CDNs.
Abstract: Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models are highly dependent on the quality of incoming data streams. One of the primary challenges with Bayesian networks is their vulnerability to adversarial data poisoning attacks, wherein malicious data is injected into the training dataset to negatively influence the Bayesian network models and impair their performance. In this paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. Our framework utilizes latent variables to quantify the amount of belief between every two nodes in each causal model over time. Considering four different forms of data poisoning attack, we specifically aim to strengthen the security and dependability of Bayesian network structure learning techniques, such as the PC algorithm, exploring the complexity of this area and offering workable methods for identifying and mitigating these subtle threats. Additionally, our research investigates one particular use case, the "Visit to Asia Network," examining the practical consequences of using uncertainty as a way to spot cases of data poisoning. Our results demonstrate the promising efficacy of latent variables in detecting and mitigating the threat of data poisoning attacks, and the proposed latent-based framework proves sensitive in detecting malicious data poisoning attacks in the context of stream data.
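The abstract does not define how belief between two nodes is quantified over time; one simple stand-in, sketched below, is the empirical mutual information of a node pair computed over successive stream windows, flagging a window whose edge strength jumps sharply. The window size, jump threshold and data are all invented for illustration and are not the paper's method.

```python
import math
from collections import Counter

def mutual_info(pairs):
    """Empirical mutual information (bits) between two binary variables
    observed as a list of (x, y) pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in joint.items():
        pxy = c / n
        mi += pxy * math.log2(pxy / ((px[x] / n) * (py[y] / n)))
    return mi

def flag_poisoning(stream, window=50, jump=0.3):
    """Flag window start indices where the edge strength (MI) shifts by
    more than `jump` bits relative to the previous window."""
    flags = []
    prev = None
    for start in range(0, len(stream) - window + 1, window):
        mi = mutual_info(stream[start:start + window])
        if prev is not None and abs(mi - prev) > jump:
            flags.append(start)
        prev = mi
    return flags

# Clean window: X == Y (strong dependence); poisoned window: Y mostly flipped
clean = [(i % 2, i % 2) for i in range(50)]
poisoned = [(i % 2, (i + 1) % 2 if i % 3 else i % 2) for i in range(50)]
print(flag_poisoning(clean + poisoned))  # flags the poisoned window start
```

A latent-variable treatment would replace the raw MI with a posterior belief per edge, but the drift-flagging logic stays the same.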
Funding: Supported by the National Natural Science Foundation of China (NSFC) (61831002, 62001076) and the General Program of the Natural Science Foundation of Chongqing (No. CSTB2023NSCQ-MSX0726, No. cstc2020jcyjmsxmX0878).
Abstract: Wireless sensor networks (WSNs) are widely utilized in large-scale distributed unmanned detection scenarios due to their low cost and flexible installation. However, WSN data collection encounters challenges in scenarios lacking communication infrastructure. Unmanned aerial vehicles (UAVs) offer a novel solution for WSN data collection, leveraging their high mobility. In this paper, we present an efficient UAV-assisted data collection algorithm aimed at minimizing the overall power consumption of the WSN. First, a two-layer UAV-assisted data collection model is introduced, comprising a ground layer and an aerial layer. In the ground layer, the cluster members (CMs) sense environmental data and transmit it to the cluster heads (CHs), which forward the collected data to the UAVs. The aerial layer consists of multiple UAVs that collect, store, and forward data from the CHs to the data center for analysis. Second, an improved clustering algorithm based on K-Means++ is proposed to optimize the number and locations of CHs. Moreover, an Actor-Critic based algorithm is introduced to optimize the UAV deployment and the association with CHs. Finally, simulation results verify the effectiveness of the proposed algorithms.
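The paper's specific improvement to K-Means++ is not given in the abstract; for context, the sketch below shows the seeding idea it builds on, in a deterministic greedy (farthest-first) variant chosen here for reproducibility: each new cluster-head seed is the point farthest from the already chosen seeds (standard K-Means++ instead samples proportionally to that squared distance). Sensor positions are invented.

```python
def kmeanspp_seeds(points, k):
    """Greedy (farthest-first) variant of K-Means++ seeding: each new
    centre is the point with the largest squared distance to its
    nearest chosen centre, spreading cluster heads across the field."""
    centres = [points[0]]
    while len(centres) < k:
        d2 = lambda p: min((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2
                           for c in centres)
        centres.append(max(points, key=d2))
    return centres

# Hypothetical sensor positions forming two spatial groups
sensors = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
print(kmeanspp_seeds(sensors, 2))  # one seed lands in each group
```

Good seeding keeps CHs well separated, which shortens CM-to-CH links and lowers the transmission power the overall algorithm minimizes.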
Funding: Supported by the General Program grant funded by the National Natural Science Foundation of China (NSFC) (No. 62171307) and the Basic Research Program of Shanxi Province grant funded by the Department of Science and Technology of Shanxi Province, China (No. 202103021224113).
Abstract: Early and timely diagnosis of stroke is critical for effective treatment, and the electroencephalogram (EEG) offers a low-cost, non-invasive solution. However, the shortage of high-quality patient EEG data often hampers the accuracy of diagnostic classification methods based on deep learning. To address this issue, our study designed a deep data amplification model named Progressive Conditional Generative Adversarial Network with Efficient Approximating Self Attention (PCGAN-EASA), which incrementally improves the quality of generated EEG features. The network can yield full-scale, fine-grained EEG features from low-scale, coarse ones. Specifically, to overcome the limitation of traditional generative models that fail to generate features tailored to individual patient characteristics, we developed an encoder with an efficient approximating self-attention mechanism. This encoder not only automatically extracts relevant features across different patients but also reduces computational resource consumption. Furthermore, the adversarial and reconstruction loss functions were redesigned to better align with the training characteristics of the network and the spatial correlations among electrodes. Extensive experimental results demonstrate that PCGAN-EASA provides the highest generation quality and the lowest computational resource usage compared with several existing approaches, and that it significantly improves the accuracy of subsequent stroke classification tasks.
Funding: Supported by the National Natural Science Foundation of China under Grant No. 61973037 and the China Postdoctoral Science Foundation under Grant No. 2022M720419.
Abstract: Automatic modulation recognition (AMR) of radiation source signals is a research focus in the field of cognitive radio. However, AMR of radiation source signals at low SNRs still faces a great challenge. Therefore, an AMR method for radiation source signals based on a two-dimensional data matrix and an improved residual neural network is proposed in this paper. First, the time series of the radiation source signals are reshaped into two-dimensional data matrices, which greatly simplifies signal preprocessing. Second, a depthwise convolution and large-size convolutional kernel based residual neural network (DLRNet) is proposed to improve the feature extraction capability of the AMR model. Finally, the model performs feature extraction and classification on the two-dimensional data matrix to obtain a recognition vector that represents the signal modulation type. Theoretical analysis and simulation results show that the proposed method significantly improves recognition accuracy, which remains above 90% even at -14 dB SNR.
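The reshaping step the abstract describes, folding a 1-D sample sequence into a 2-D matrix that a convolutional network can consume, can be sketched in a few lines; the row width and signal values below are illustrative assumptions, not the paper's parameters.

```python
def to_matrix(samples, width):
    """Reshape a 1-D sample sequence into a 2-D data matrix
    (row-major), truncating any trailing partial row."""
    rows = len(samples) // width
    return [samples[r * width:(r + 1) * width] for r in range(rows)]

# Hypothetical 8-sample signal folded into a 2x4 matrix
signal = [0.1, 0.4, -0.2, 0.3, 0.0, -0.5, 0.2, 0.1]
print(to_matrix(signal, 4))  # [[0.1, 0.4, -0.2, 0.3], [0.0, -0.5, 0.2, 0.1]]
```

Treating the result as a single-channel "image" lets standard 2-D convolutional kernels, including the large depthwise kernels DLRNet uses, slide over temporally adjacent samples in both directions.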
Funding: The National Key R&D Program of China under contract No. 2021YFC3101603.
Abstract: Ocean temperature is an important physical variable in marine ecosystems, and its prediction is an important research objective in ocean-related fields. Currently, data-driven methods are commonly used for ocean temperature prediction, but research has mostly been limited to the sea surface, with few studies on the prediction of internal ocean temperature. Existing graph neural network-based methods usually use predefined graphs or learned static graphs, which cannot capture the dynamic associations among the data. In this study, we propose a novel dynamic spatiotemporal graph neural network (DSTGN) to predict three-dimensional ocean temperature (3D-OT), which combines static graph learning and dynamic graph learning to automatically mine two kinds of unknown dependencies between sequences from the original 3D-OT data without prior knowledge. Temporal and spatial dependencies in the time series are then captured using temporal and graph convolutions. We integrated dynamic graph learning, static graph learning, graph convolution, and temporal convolution into an end-to-end framework for 3D-OT prediction using time-series grid data. Prediction experiments were conducted with high-resolution 3D-OT from the Copernicus global ocean physical reanalysis, with data covering the vertical variation of temperature from the sea surface to 1000 m below. Compared with five mainstream models commonly used for ocean temperature prediction, our method achieved the best results at all prediction scales.
Funding: This work was supported by the National Natural Science Foundation of China (U2133208, U20A20161).
Abstract: Traditional air traffic control (ATC) information sharing offers weak protection for personal privacy data and performs poorly, which easily allows the data to be usurped. Starting from the application of the ATC network, this paper focuses on the zero-trust access strategy and the tamper-proofing of information-sharing network data. Through improvements to ATC zero-trust physical-layer authentication and distributed feature-differentiation calculation of network data, this paper reconstructs the personal privacy scope authentication structure and designs a tamper-proof method for ATC information sharing on the Internet. By moving from a single management authority to unified management of data units, a systematic algorithmic improvement of the shared-network data tamper-prevention method is realized, and the Reliable Data Transfer Protocol (RDTP) is selected for the network data of information-sharing resources to keep air traffic control data tamper-proof during transmission. The results show that this method can reasonably prevent tampering with information shared on the Internet and maintain the security of air traffic control information sharing, while Central Processing Unit (CPU) utilization is only 4.64%, effectively increasing the performance of the comprehensive security protection system for air traffic control data.
基金support of the Interdisciplinary Research Center for Intelligent Secure Systems(IRC-ISS)Internal Fund Grant#INSS2202.
Abstract: The use of the Internet of Things (IoT) is expanding at an unprecedented scale in many critical applications due to its ability to interconnect and utilize a plethora of devices. In critical infrastructure domains like oil and gas supply, intelligent transportation, power grids, and autonomous agriculture, it is essential to guarantee the confidentiality, integrity, and authenticity of the data collected and exchanged. However, the limited resources and heterogeneity of IoT devices make it inefficient or sometimes infeasible to achieve secure data transmission using traditional cryptographic techniques. Consequently, designing a lightweight secure data transmission scheme is becoming essential. In this article, we propose a lightweight secure data transmission (LSDT) scheme for IoT environments. LSDT consists of three phases and utilizes an effective combination of symmetric keys and the Elliptic Curve Menezes-Qu-Vanstone asymmetric key agreement protocol. We design a simulation environment and experiments to evaluate the performance of the LSDT scheme in terms of communication and computation costs. Security and performance analysis indicates that the LSDT scheme is secure, suitable for IoT applications, and performs better than other related security schemes.
Abstract: The inter-city linkage heat data provided by Baidu Migration is employed to characterize inter-city linkages and thereby study the network linkage characteristics and hierarchical structure of the Greater Bay Area urban agglomeration using social network analysis. This is the first application of location-based-service big data to the study of urban agglomeration network structure, representing a novel research perspective on the topic. The study reveals that the density of network linkages in the Greater Bay Area urban agglomeration has reached 100%, indicating a mature network-like spatial structure. This structure has given rise to three distinct communities: Shenzhen-Dongguan-Huizhou, Guangzhou-Foshan-Zhaoqing, and Zhuhai-Zhongshan-Jiangmen. Additionally, cities within the urban agglomeration play different roles, suggesting that varying development strategies may be necessary to achieve staggered development. The study demonstrates that large datasets represented by LBS can offer novel insights and methodologies for examining urban agglomeration network structures, contingent on appropriate mining and processing of the data.
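The 100% network density figure has a standard social-network-analysis definition: observed links divided by all possible ordered city pairs. The sketch below computes it for a toy three-city linkage set; the cities and links are illustrative, not the study's data.

```python
def network_density(nodes, links):
    """Density of a directed linkage network: observed links divided by
    the n*(n-1) possible ordered node pairs (1.0 = fully connected)."""
    possible = len(nodes) * (len(nodes) - 1)
    return len(links) / possible

cities = ["Shenzhen", "Dongguan", "Huizhou"]
# A fully connected toy linkage set: every ordered pair present
links = [(a, b) for a in cities for b in cities if a != b]
print(network_density(cities, links))  # 1.0, i.e. "100%" density
```

A density of 1.0 means every city pair exchanges measurable migration heat, which is what the abstract calls a mature network-like spatial structure.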
Funding: Supported by the National Natural Science Foundation of China (Grant No. 82104701), the Science Fund Program for Outstanding Young Scholars in Universities of Anhui Province (Grant No. 2022AH030064), the Key Project at Central Government Level: the Ability Establishment of Sustainable Use for Valuable Chinese Medicine Resources (Grant No. 2060302), the Foundation of Anhui Province Key Laboratory of Pharmaceutical Preparation Technology and Application (Grant No. 2021KFKT10), the China Agriculture Research System of MOF and MARA (Grant No. CARS-21) and the Talent Support Program of Anhui University of Chinese Medicine (Grant No. 2020rcyb007).
Abstract: Background: Diabetic retinopathy (DR) is currently the leading cause of blindness in elderly individuals with diabetes. Traditional Chinese medicine (TCM) prescriptions have shown remarkable effectiveness for treating DR. This study aimed to screen a novel TCM prescription against DR from patents and to elucidate its medication rule and molecular mechanism using data mining, network pharmacology, molecular docking and molecular dynamics (MD) simulation. Method: TCM prescriptions for treating DR were collected from patents, and a novel TCM prescription was identified using data mining. Subsequently, the mechanism of the novel TCM prescription against DR was explored by constructing a network of core TCMs-core active ingredients-core targets-core pathways. Finally, molecular docking and MD simulation were employed to validate the findings from network pharmacology. Result: The TCMs of the collected prescriptions primarily possessed bitter and cold properties with heat-clearing and supplementing effects, attributed to the liver, lung and kidney channels. Notably, a novel TCM prescription for treating DR was identified, composed of Lycii Fructus, Chrysanthemi Flos, Astragali Radix and Angelicae Sinensis Radix. Twenty core active ingredients and ten core targets of the novel TCM prescription for treating DR were screened. Moreover, the novel TCM prescription played a crucial role in treating DR by inhibiting inflammatory response, oxidative stress, retinal pigment epithelium cell apoptosis and retinal neovascularization through various pathways, such as the AGE-RAGE signaling pathway in diabetic complications and the MAPK signaling pathway. Finally, molecular docking and MD simulation demonstrated that almost all core active ingredients exhibited satisfactory binding energies to core targets. Conclusions: This study identified a novel TCM prescription and unveiled its multi-component, multi-target and multi-pathway characteristics for treating DR. These findings provide a scientific basis and novel insights into the development of drugs for DR prevention and treatment.
Abstract: This study aims to explore the application of Bayesian analysis based on neural networks and deep learning in data visualization. The background of the research is that, with the increasing volume and complexity of data, traditional data analysis methods can no longer meet the needs. The research methods include building neural network and deep learning models, optimizing and improving them through Bayesian analysis, and applying them to the visualization of large-scale data sets. The results show that neural networks combined with Bayesian analysis and deep learning can effectively improve the accuracy and efficiency of data visualization and enhance the intuitiveness and depth of data interpretation. The significance of the research is that it provides a new solution for data visualization in the big data environment and helps to further promote the development and application of data science.