Automatic modulation recognition (AMR) of radiation source signals is a research focus in the field of cognitive radio. However, the AMR of radiation source signals at low SNRs still faces a great challenge. Therefore, an AMR method for radiation source signals based on a two-dimensional data matrix and an improved residual neural network is proposed in this paper. First, the time series of the radiation source signals are reconstructed into two-dimensional data matrices, which greatly simplifies signal preprocessing. Second, a residual neural network based on depthwise convolution and large-size convolutional kernels (DLRNet) is proposed to improve the feature extraction capability of the AMR model. Finally, the model performs feature extraction and classification on the two-dimensional data matrix to obtain a recognition vector that represents the signal modulation type. Theoretical analysis and simulation results show that the proposed method significantly improves recognition accuracy, which remains above 90% even at -14 dB SNR.
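As a rough illustration of the reconstruction step, the sketch below folds a one-dimensional complex IQ series into a two-dimensional real-valued matrix; the matrix width and the I/Q stacking convention are illustrative assumptions, since the paper's exact layout is not reproduced here.

```python
import numpy as np

def series_to_matrix(iq, width=64):
    """Reshape a 1-D complex IQ series into a 2-D real-valued matrix.

    The I and Q components are stacked row-wise after folding the
    series into rows of `width` samples; `width` is a hypothetical
    choice, not a value taken from the paper.
    """
    n = (len(iq) // width) * width          # drop the ragged tail
    folded = iq[:n].reshape(-1, width)      # one row per signal segment
    return np.vstack([folded.real, folded.imag])

# Example: a noisy QPSK-like burst folded into a 2-D matrix
rng = np.random.default_rng(0)
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=4096)
matrix = series_to_matrix(symbols + 0.1 * rng.standard_normal(4096))
print(matrix.shape)  # (128, 64)
```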
Aeromagnetic data over the Mamfe Basin have been processed. A regional magnetic gridded dataset was obtained from the Total Magnetic Intensity (TMI) data grid using a 3 × 3 convolution (Hanning) filter to remove regional trends. Major similarities in magnetic field orientation and intensity were observed at identical locations on both the regional and TMI data grids. From the regional and TMI gridded datasets, a residual dataset was generated that represents the very shallow geological features of the basin. Processing this residual data grid using Source Parameter Imaging (SPI) for magnetic depth suggests that the estimated depths to magnetic sources in the basin range from about 271 m to 3552 m. The greatest depths occur in two main locations around the central portion of the study area, corresponding to the area with positive magnetic susceptibilities, as well as in areas extending outwards across the eastern boundary of the study area. Shallow magnetic depths are prominent towards the NW portion of the basin and correspond to areas of negative magnetic susceptibilities. The basin generally exhibits a variation in the depth of magnetic sources across high, average, and shallow depths. The presence of intrusive igneous rocks was also observed in this basin. This characteristic points to the existence of geologic resources of interest for exploration in the basin.
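The regional-residual separation described above can be sketched as follows, assuming a normalized 3 × 3 smoothing kernel stands in for the Hanning filter; the exact filter coefficients and grid handling in the actual workflow may differ.

```python
import numpy as np
from scipy.signal import convolve2d

# Assumed normalized 3x3 Hanning-style smoothing kernel; the exact
# coefficients used in the processing workflow are not given here.
kernel = np.array([[1.0, 2.0, 1.0],
                   [2.0, 4.0, 2.0],
                   [1.0, 2.0, 1.0]])
kernel /= kernel.sum()

def regional_residual(tmi_grid):
    """Split a TMI grid into regional (smoothed) and residual parts."""
    regional = convolve2d(tmi_grid, kernel, mode="same", boundary="symm")
    return regional, tmi_grid - regional

# Toy usage on a synthetic grid
tmi = np.random.default_rng(1).normal(0.0, 50.0, size=(128, 128))
regional, residual = regional_residual(tmi)
```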
Cloud Datacenter Network (CDN) providers usually have the option to scale their network structures to allow for far more resource capacity, though such scaling options may come with exponential costs that contradict their utility objectives. Yet, besides the cost of the physical assets and network resources, such scaling may also impose more load on the electricity power grids to feed the added nodes with the energy required to run and cool them, which comes with extra costs too. Thus, CDN providers who utilize their resources better can afford to offer their services at lower price-units than those who simply choose to scale. Resource utilization is a challenging process; indeed, clients of CDNs usually tend to exaggerate their true resource requirements when they lease resources. Service providers are committed to their clients through Service Level Agreements (SLAs), so any amendment to the resource allocations needs to be approved by the clients first. In this work, we propose deploying a Stackelberg leadership framework to formulate a negotiation game between cloud service providers and their client tenants, through which the providers seek to retrieve leased but unused resources from their clients. Cooperation is not expected from the clients, who may demand high price-units to return their extra resources to the provider's premises. Hence, to motivate cooperation in such a non-cooperative game, we developed an incentive-compatible pricing model for the returned resources as an extension of Vickrey auctions. Moreover, we propose building a behavior belief function that shapes the negotiation and compensation for each client. Compared to other benchmark models, the assessment results show that our proposed models provide timely negotiation schemes, better resource utilization rates, higher utilities, and grid-friendly CDNs.
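A minimal sketch of the Vickrey-style incentive at the heart of the pricing model: in a reverse (procurement) auction for returned resources, the lowest ask wins but is paid the second-lowest ask, which makes truthful asking a dominant strategy. The paper's extended model with belief functions is more elaborate; this is only the baseline mechanism.

```python
def vickrey_payment(asks):
    """Second-price (Vickrey-style) payment rule for a reverse auction.

    Clients state the price-unit they want for returning unused
    resources; the lowest asker wins but is paid the second-lowest
    ask.  This is a generic illustration, not the paper's full
    incentive-compatible pricing model.
    """
    ranked = sorted(asks.items(), key=lambda kv: kv[1])
    winner, _ = ranked[0]
    price_paid = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price_paid

# Hypothetical tenant asks (price-units per returned resource)
winner, price = vickrey_payment({"tenant_a": 4.0, "tenant_b": 2.5, "tenant_c": 3.2})
print(winner, price)  # tenant_b is paid 3.2, the second-lowest ask
```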
The power Internet of Things (IoT) is a significant trend in technology and a requirement for national strategic development. With the deepening digital transformation of the power grid, China's power system has initially built a power IoT architecture comprising a perception layer, a network layer, and a platform application layer. However, owing to the structural complexity of the power system, the construction of the power IoT continues to face problems such as complex access management of massive heterogeneous equipment, diverse IoT protocol access methods, highly concurrent network communications, and weak data security protection. To address these issues, this study optimizes the existing architecture of the power IoT and designs an integrated management framework for the access of multi-source heterogeneous data in the power IoT, comprising cloud, pipe, edge, and terminal parts. It further reviews and analyzes the key technologies involved in the power IoT, such as unified physical-model management, highly concurrent access, multi-protocol access, multi-source heterogeneous data storage management, and data security control, to provide a more flexible, efficient, secure, and easy-to-use solution for multi-source heterogeneous data access in the power IoT.
Multiturn coils are currently an effective transmitter for the transient electromagnetic method (TEM) in narrow spaces and complex terrain. However, their high mutual-inductance coupling and long turn-off time affect the quality of later data processing and interpretation. Compared with multiturn coils, the new conical source has low mutual inductance and a short turn-off time. Based on the superposition principle, we use the Hankel transform and a numerical filtering method for forward modelling of the conical source field in layered media and explore the TEM characteristics excited by this source. We apply improved damped least-squares inversion to integrated TEM data. We first invert the induced voltage into apparent resistivity and apparent depth, then use the inverted results as input parameters of the initial model and transform the apparent resistivity data into the frequency domain. Damped least-squares inversion is then performed in the frequency domain using the initial model. Subsequently, we use automated model building to search for the extremes and inflection points in the resistivity-depth data, which are treated as critical layer parameters. The inversion of theoretical and observed data suggests that the method refines the resistivity and depth estimates and yields a model of the underground layers.
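A sketch of a single damped least-squares update of the kind used in the frequency-domain inversion, with a finite-difference Jacobian; the damping factor, the model parameterization, and the stopping rules are illustrative assumptions.

```python
import numpy as np

def damped_lsq_step(forward, m, d_obs, lam=1e-2, eps=1e-6):
    """One damped least-squares (Levenberg-Marquardt style) model update.

    forward : callable mapping model parameters -> predicted data
    m       : current model vector (e.g. log-resistivities and depths)
    d_obs   : observed data (e.g. induced-voltage-derived responses)
    lam     : damping factor stabilizing the normal equations

    The Jacobian is built by forward differences; this illustrates the
    update itself, not the paper's full inversion workflow.
    """
    r = d_obs - forward(m)
    J = np.empty((r.size, m.size))
    for j in range(m.size):
        dm = np.zeros_like(m)
        dm[j] = eps
        J[:, j] = (forward(m + dm) - forward(m)) / eps
    step = np.linalg.solve(J.T @ J + lam * np.eye(m.size), J.T @ r)
    return m + step

# Toy usage with a linear "forward model", so one step nearly recovers m_true
A = np.array([[1.0, 0.5], [0.2, 2.0], [0.7, 0.7]])
m_true = np.array([3.0, -1.0])
m = damped_lsq_step(lambda m: A @ m, np.zeros(2), A @ m_true)
```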
MORPAS is a special GIS (geographic information system) software system, based on the MAPGIS platform, whose aim is to prospect and evaluate mineral resources quantitatively by synthesizing geological, geophysical, geochemical, and remote sensing data. It integrates geological database management, geological background and geological anomaly analysis, remote sensing image processing, and comprehensive anomaly analysis. It puts forward an integrated solution for applying GIS in basic-level units and for the construction of information engineering in the geological field. With the popularization of computer networks and the demand for data sharing, it is necessary to extend its data management functions so that all its data files can be accessed on a network server. This paper uses some MAPGIS functions for secondary development, together with the ADO (ActiveX Data Objects) technique, to access multi-source geological data in SQL Server databases, enabling remote access and consistent management in the MORPAS system.
Understanding the origins of potential source rocks and unraveling the intricate connections between reservoir oils and their source formations in the Siwa Basin (Western Desert, Egypt) necessitate a thorough oil-source correlation investigation. This objective is achieved through a meticulous analysis of well-log responses, Rock-Eval pyrolysis, and biomarker data. The analysis of Total Organic Carbon across 31 samples representing Paleozoic formations in the Siwa A-1X well reveals a spectrum of organic richness ranging from 0.17 wt% to 2.04 wt%, highlighting diverse levels of organic content and the presence of both Type II and Type III kerogen. Examination of the fingerprint characteristics of eight samples from the well suggests that the Dhiffah Formation comprises a blend of terrestrial and marine organic matter; notably, a significant contribution from more oxidized residual organic matter and gas-prone Type III kerogen is observed. By contrast, the Desouky and Zeitoun formations exhibit mixed organic matter indicative of a transitional environment, featuring a pronounced marine influence within a more reducing setting associated with Type II kerogen. Analysis of five oil samples from different wells (SIWA L-1X, SIWA R-3X, SIWA D-1X, PTAH 5X, and PTAH 6X) shows that terrestrial organic matter, augmented by considerable marine input, was deposited in an oxidizing environment and contains Type III kerogen. Geochemical scrutiny confirms the coexistence of mixed terrestrial organic matter within varying redox environments. Noteworthy is the uniformity of the identified kerogen Types II and III across all samples, both known to have potential for hydrocarbon generation. The findings presented in this paper unveil compelling prospects concerning the genesis of oil in the Jurassic Safa reservoir, suggesting potential links to Paleozoic sources or even an origin within the Safa Member itself. These revelations mark a substantial advancement in understanding source rock dynamics and their relationship with reservoir oils in the Siwa Basin; by illuminating the processes of hydrocarbon genesis in the region, this study significantly enriches the knowledge base.
The statistical characteristics of real traffic information were analyzed, and the traffic was approximated as a Gaussian time series. A data source model, called two-state constant bit rate (TSCBR), was proposed for dynamic traffic monitoring sensor networks. Analysis of the autocorrelation of the models shows that the proposed TSCBR model closely matches the statistical characteristics of the real data source. To further verify the validity of the TSCBR data source model, the performance metrics of power consumption and network lifetime were studied in an evaluation of the sensor media access control (SMAC) algorithm. The simulation results show that, compared with traditional data source models, the TSCBR model can significantly improve the accuracy of algorithm evaluation.
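A sketch of how a TSCBR-style source can be simulated and its autocorrelation computed for comparison against real traffic; the two rates and the state-holding probability are illustrative assumptions rather than the paper's fitted values.

```python
import numpy as np

def tscbr_trace(n, rate_hi=8.0, rate_lo=1.0, p_stay=0.95, seed=0):
    """Simulate a two-state constant-bit-rate (TSCBR) source.

    The source alternates between a high-rate and a low-rate state
    under a two-state Markov chain; all numeric values here are
    illustrative assumptions, not parameters from the paper.
    """
    rng = np.random.default_rng(seed)
    state, out = 0, np.empty(n)
    for i in range(n):
        out[i] = rate_hi if state == 0 else rate_lo
        if rng.random() > p_stay:   # leave the current state
            state ^= 1
    return out

def autocorr(x, max_lag=50):
    """Normalized autocorrelation used to compare source models."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k or None], x[k:]) / denom
                     for k in range(max_lag)])

rho = autocorr(tscbr_trace(10_000))  # compare against measured traffic
```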
A magnetic field produced by a current flowing through the plasma grid (PG) is one solution for reducing the collisional loss of negative ions in a negative ion source, as it reduces the electron temperature in front of the PG. However, the magnetic field that diffuses into the driver influences the plasma outflow. To investigate the effect of changing this magnetic field on the outflow of plasma from the driver, a circular ring (absorber) of high-permeability iron was introduced at the driver exit, which reduces the magnetic field around it and improves the plasma outflow. With the absorber applied, the electron density is increased by about 35%, and the extraction current measured from the extraction grid is increased from 1.02 A to 1.29 A. The results of the extraction experiment with cesium injection show that both the extraction grid (EG) current and the H⁻ current increase when the absorber is introduced.
Sharing data while protecting privacy in the industrial Internet is a significant challenge. Traditional machine learning methods require all data to be combined for training; however, this approach can be limited by data availability and privacy concerns. Federated learning (FL) has gained considerable attention because it allows decentralized training on multiple local datasets. However, the training data collected by data providers are often non-independent and identically distributed (non-IID), resulting in poor FL performance. This paper proposes a privacy-preserving approach for sharing non-IID data in the industrial Internet using FL combined with blockchain technology. To overcome the poor training accuracy caused by non-IID data, we propose dynamically updating the local model based on the divergence between the global and local models, which can significantly improve the accuracy of FL training when dispersion is relatively large. In addition, we design a dynamic gradient clipping algorithm to alleviate the influence of noise on model accuracy and to reduce the potential privacy leakage caused by sharing model parameters. Finally, we evaluate the performance of the proposed scheme using commonly used open-source image datasets. The simulation results demonstrate that our method can significantly enhance accuracy while protecting privacy and maintaining efficiency, thereby providing a new solution to the data-sharing and privacy-protection challenges in the industrial Internet.
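A minimal sketch of the two ideas named above, divergence-driven local-model correction and gradient clipping before parameters are shared; the L2 divergence measure and the mixing rule are assumptions, as the paper's exact update rule is not reproduced here.

```python
import numpy as np

def divergence(w_local, w_global):
    """Relative L2 divergence between local and global model parameters."""
    return np.linalg.norm(w_local - w_global) / (np.linalg.norm(w_global) + 1e-12)

def update_local(w_local, w_global, alpha=0.5):
    """Pull a drifting local model toward the global model.

    The mixing weight grows with the measured divergence, so clients
    with strongly non-IID data are corrected more aggressively.  The
    paper's actual weighting rule may differ; this is a sketch.
    """
    mix = min(1.0, alpha * divergence(w_local, w_global))
    return (1.0 - mix) * w_local + mix * w_global

def clip_gradient(grad, max_norm):
    """Gradient clipping applied before parameters are shared."""
    norm = np.linalg.norm(grad)
    return grad * (max_norm / norm) if norm > max_norm else grad
```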
With recent technological developments, massive vehicular ad hoc networks (VANETs) have been established, enabling numerous vehicles and their respective Road Side Unit (RSU) components to communicate with one another. The best way to enhance traffic flow for vehicles and traffic management departments is to share the data they receive, yet VANET systems still lack sufficient protection. An effective and safe method of outsourcing is suggested, which reduces computation costs by achieving data security using a homomorphic mapping based on the conjugate operation of matrices. This research proposes a VANET-based data outsourcing system to fix these issues. To keep data outsourcing secure, the suggested model takes cryptographic models into account. Fog nodes keep the generated keys for the purpose of vehicle authentication. For controlling and overseeing the outsourced data while preserving privacy, the suggested approach considers a Trusted Certified Auditor (TCA). Using the secret key, the TCA can identify the genuine identity of VANET nodes when harmful messages are detected. The proposed model develops a TCA-based unique static vehicle labeling system using cryptography (TCA-USVLC) for secure data outsourcing and privacy preservation in VANETs. The proposed model calculates the trust of vehicles in 16 ms for an average of 180 vehicles and achieves 98.6% accuracy for data encryption to provide security. The proposed model achieved 98.5% accuracy in data outsourcing and 98.6% accuracy in privacy preservation in fog-enabled VANETs. Elliptic curve cryptography models can be applied in the future for better encryption and decryption rates with lightweight cryptographic operations.
Large-scale wireless sensor networks (WSNs) play a critical role in monitoring dangerous scenarios and responding to medical emergencies. However, the inherent instability and error-prone nature of wireless links present significant challenges, necessitating efficient data collection and reliable transmission services. This paper addresses the limitations of existing data transmission and recovery protocols by proposing a systematic end-to-end design tailored for medical event-driven, cluster-based, large-scale WSNs. The primary goal is to enhance the reliability of data collection and transmission services through a comprehensive and practical approach. Our approach focuses on refining the hop-count-based routing scheme to achieve fairness in forwarding reliability. Additionally, it emphasizes reliable data collection within clusters and establishes robust data transmission over multiple hops. These systematic improvements are designed to optimize the overall performance of the WSN in real-world scenarios. Simulation results validate the proposed protocol's exceptional performance compared with other prominent data transmission schemes. The evaluation spans varying sensor densities, wireless channel conditions, and packet transmission rates, showcasing the protocol's superiority in ensuring reliable and efficient data transfer. Our systematic end-to-end design successfully addresses the challenges posed by unstable wireless links in large-scale WSNs. By prioritizing fairness, reliability, and efficiency, the proposed protocol demonstrates its efficacy in enhancing data collection and transmission services, thereby offering a valuable contribution to the field of medical event-driven WSNs.
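To make the fairness goal concrete: with independent links, end-to-end reliability falls as p^h with hop count h, so distant nodes need a stronger per-hop guarantee. The sketch below computes that per-hop target and a retransmission budget; it illustrates the principle only, not the protocol's actual mechanism.

```python
import math

def per_hop_reliability(target_e2e, hops):
    """Per-hop success probability needed so that nodes at different
    hop counts reach the same end-to-end delivery probability.

    With independent links, end-to-end reliability is p ** hops, so
    fairness across hop counts requires p = target ** (1 / hops).
    """
    return target_e2e ** (1.0 / hops)

def retransmissions_needed(link_success, per_hop_target):
    """Attempts per hop so that at least one succeeds with the
    required probability: 1 - (1 - q) ** n >= p."""
    return math.ceil(math.log(1.0 - per_hop_target)
                     / math.log(1.0 - link_success))

# Illustrative numbers: 95% end-to-end target over a 0.8-reliable link
for h in (2, 4, 8):
    p = per_hop_reliability(0.95, h)
    print(h, round(p, 4), retransmissions_needed(0.8, p))
```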
While progress has been made in information source localization, existing work has overlooked the friend and adversarial relationships prevalent in social networks. This paper addresses this gap by focusing on source localization in signed network models. Leveraging the topological characteristics of signed networks and transforming propagation probabilities into effective distances, we propose an optimization method for observer selection. Additionally, using a reverse propagation algorithm, we present a method for information source localization in signed networks. Extensive experimental results demonstrate that a higher proportion of positive edges within signed networks contributes to more accurate source localization, and the higher the ratio of propagation rates between positive and negative edges, the more accurate the localization becomes. Interestingly, this aligns with the observation that, in reality, the number of friends tends to be greater than the number of adversaries, and the likelihood of information propagation among friends is often higher than among adversaries. In addition, a source located at the periphery of the network is not easy to identify. Furthermore, our proposed observer selection method based on effective distance achieves higher operational efficiency and higher accuracy in information source localization compared with three observer selection strategies based on classical full-order neighbor coverage.
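A sketch of the probability-to-effective-distance transformation and its use in shortest-path computation over a network's links; the mapping d = 1 - ln p is a common convention and is assumed here, as is the toy graph.

```python
import heapq
import math

def effective_distance(p):
    """Map a propagation probability onto an effective distance.

    Follows the common convention d = 1 - ln(p): likely links become
    short, unlikely links long, so spreading looks metric.  Whether
    the paper uses this exact mapping is an assumption.
    """
    return 1.0 - math.log(p)

def shortest_effective_paths(graph, source):
    """Dijkstra over effective distances; `graph` maps node ->
    list of (neighbor, propagation_probability) pairs."""
    dist, heap = {source: 0.0}, [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue
        for v, p in graph.get(u, []):
            nd = d + effective_distance(p)
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy signed-network fragment: strong "friend" links are shorter
g = {"a": [("b", 0.9), ("c", 0.2)], "b": [("c", 0.8)], "c": []}
print(shortest_effective_paths(g, "a"))
```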
Optical surveys are an important means of observing resident space objects and maintaining space situational awareness. With the application of astronomical techniques and reduction methods, wide-field-of-view telescopes have made significant contributions to discovering and identifying resident space objects. However, with the development of modern optical and electronic technology, the detection limits of instruments and infrastructure have been greatly extended, leading to an extensive number of raw images and many more sources in those images. Challenges arise when reducing these data with traditional measurement and calibration. Given the amount of data, applying machine learning algorithms is particularly feasible and reliable. Here, an end-to-end deep learning framework is developed; it is trained with a priori information on raw detections, and the automatic detection task is then performed on newly acquired data. The closed loop is evaluated on consecutive CCD images obtained with a dedicated space debris survey telescope. We demonstrate that our framework achieves high performance compared with the traditional method and that, with data fusion, the efficiency of the system can be improved without changing hardware or deploying new devices. The technique deserves wider application in many fields of observational astronomy.
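For contrast with the learned detector, a baseline of the traditional measurement step might look like the following: sigma-threshold a background-subtracted CCD frame and extract connected components as candidate sources. The threshold factor and minimum component size are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def detect_sources(image, k=5.0, min_pixels=3):
    """Baseline source detection on a CCD frame.

    Thresholds the background-subtracted image at k robust sigmas and
    returns centroids of connected components; this stands in for the
    traditional pipeline the deep learning framework replaces.
    """
    background = np.median(image)
    noise = 1.4826 * np.median(np.abs(image - background))  # robust sigma via MAD
    mask = image > background + k * noise
    labels, n = ndimage.label(mask)
    idx = range(1, n + 1)
    centroids = ndimage.center_of_mass(image - background, labels, idx)
    sizes = ndimage.sum(mask, labels, idx)
    return [c for c, s in zip(centroids, sizes) if s >= min_pixels]

# Toy frame: noise plus one bright pixel neighborhood
frame = np.random.default_rng(2).normal(100.0, 3.0, size=(64, 64))
frame[30:33, 40:43] += 60.0
print(detect_sources(frame))
```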
In source detection for the Tianlai project, accurately locating the interferometric fringe in visibility data strongly influences downstream tasks such as physical parameter estimation and weak source exploration. Considering that traditional locating methods are time-consuming and supervised methods require a large quantity of expensive labeled data, in this paper we first investigate the characteristics of interferometric fringes in simulated and real scenarios separately, then integrate an almost parameter-free unsupervised clustering method with seed-filling and eraser algorithms to propose a hierarchical plug-and-play method that improves location accuracy. We then apply our method to locate the interferometric fringes of single and multiple sources in simulation data, and next to real data taken from the Tianlai radio telescope array. Finally, we compare it with state-of-the-art unsupervised methods. These results show that our method is robust across different scenarios and can effectively improve location measurement accuracy.
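A sketch of the clustering stage, with DBSCAN standing in for the paper's almost parameter-free unsupervised method: bright pixels of the |visibility| waterfall are clustered by position, and each cluster localizes one fringe. The threshold and DBSCAN parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def locate_fringes(visibility, k=3.0, eps=2.0, min_samples=10):
    """Cluster high-amplitude pixels of a |visibility| waterfall
    (time x frequency) to localize interferometric fringes.

    DBSCAN stands in for the paper's clustering method; the threshold
    factor and DBSCAN parameters are illustrative, and the paper's
    seed-filling / eraser refinement is omitted.
    """
    amp = np.abs(visibility)
    mask = amp > amp.mean() + k * amp.std()
    coords = np.argwhere(mask)              # (row, col) pixel positions
    if len(coords) == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(coords)
    return [coords[labels == lab] for lab in set(labels) if lab != -1]
```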
Software Defined Networking (SDN) is programmable through the separation of forwarding and control via the centralization of the controller. The controller plays the role of the "brain" that provides the intelligent part of SDN technology. Various versions of SDN controllers exist in response to the diverse demands and functions expected of them. Several SDN controllers are available in the open market besides a large number of commercial controllers; some are developed to meet carrier-grade service levels, and one of the recent trends in open-source SDN controllers is the Open Network Operating System (ONOS). This paper presents a comparative study of open-source SDN controllers: the Network Controller Platform (NOX), the Python-based Network Controller (POX), the component-based SDN framework Ryu, the Java-based OpenFlow controller Floodlight, OpenDayLight (ODL), and ONOS. The discussion extends to the ONOS architecture as well as the evolution of ONOS controllers. The article reviews use cases based on ONOS controllers in several application deployments. Moreover, the opportunities and challenges of open-source SDN controllers are discussed, exploring carrier-grade ONOS for future real-world deployments and ONOS's unique features, and identifying the suitable choice of SDN controller for service providers. In addition, we attempt to answer several critical questions on the implications of the open-source nature of SDN controllers regarding vendor lock-in, interoperability, and standards compliance. Similarly, real-world use cases of organizations using open-source SDN are highlighted, along with how the open-source community contributes to the development of SDN controllers. Furthermore, challenges faced by open-source projects and considerations when choosing an open-source SDN controller are underscored. The role of Artificial Intelligence (AI) and Machine Learning (ML) in the evolution of open-source SDN controllers is then indicated in light of recent research. We also present the challenges and limitations associated with deploying open-source SDN controllers in production networks and how they can be mitigated, and finally how open-source SDN controllers handle network security and ensure that network configurations and policies are robust and resilient. Potential opportunities and challenges for future open SDN deployment are outlined to conclude the article.
The dissemination of information from multiple locations is a ubiquitous occurrence; however, prevalent methodologies for multi-source identification frequently overlook the fact that sources may initiate dissemination at distinct initial moments. Although there are many research results on multi-source identification, the challenge of locating sources with varying initiation times using a limited subset of observational nodes remains unresolved. In this study, we provide the backward spread tree theorem and the source centrality theorem, and we develop a backward spread centrality algorithm to identify all the information sources that trigger spreading at different start times. The proposed algorithm does not require prior knowledge of the number of sources, yet it can estimate both the initial spread moment and the spread duration. The core concept of the algorithm is to infer suspected sources through the source centrality theorem and to locate the sources among the suspects with linear programming. Extensive experiments on synthetic and real network simulations corroborate the superiority of our method in terms of both efficacy and efficiency. Furthermore, we find that our method remains robust irrespective of the number of sources and the average degree of the network. Compared with classical and state-of-the-art source identification methods, our method generally improves the AUROC value by 0.1 to 0.2.
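A simplified single-source illustration of the backward-spreading idea: propagate hop distances backward from each observer and score every node by how consistently one start moment explains all observed arrival times. The full algorithm handles multiple sources and unknown counts via the source centrality theorem and linear programming, which this sketch omits.

```python
from collections import deque

def bfs_hops(graph, start):
    """Hop distances from `start`; `graph` maps node -> neighbor list."""
    dist, q = {start: 0}, deque([start])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def suspect_scores(graph, observations):
    """Score each node as a suspected single source.

    `observations` maps observer -> first-arrival time.  Assuming
    roughly one hop per time step, a good source explains arrivals
    with one consistent start moment, so the variance of
    (arrival_time - hop_distance) over observers is low.
    """
    dists = {o: bfs_hops(graph, o) for o in observations}
    scores = {}
    for u in graph:
        if any(u not in dists[o] for o in observations):
            continue  # unreachable from some observer; not a candidate
        offsets = [t - dists[o][u] for o, t in observations.items()]
        mean = sum(offsets) / len(offsets)
        scores[u] = sum((x - mean) ** 2 for x in offsets)
    return scores  # lower is more suspected; the mean offset estimates t0
```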
The proliferation of intelligent, connected Internet of Things (IoT) devices facilitates data collection. However, task workers may be reluctant to participate in data collection due to privacy concerns, and task requesters may be concerned about the validity of the collected data. Hence, it is vital to evaluate the quality of the data collected by task workers while protecting privacy in spatial crowdsourcing (SC) data collection tasks with IoT. To this end, this paper proposes a privacy-preserving data reliability evaluation for SC in IoT, named PARE. First, we design a data uploading format using blockchain and the Paillier homomorphic cryptosystem, providing unchangeable and traceable data while addressing privacy concerns. Second, based on the uploaded data, we propose a method to determine the approximately correct value region without knowing the exact values. Finally, we offer a data filtering mechanism based on the Paillier cryptosystem using this value region. The evaluation and analysis results show that PARE outperforms the existing solution in terms of performance and privacy protection.
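The additive homomorphism that the uploading format and filtering mechanism rely on can be demonstrated with the open-source python-paillier (`phe`) package: ciphertexts can be summed without decryption, so aggregate statistics are computable while individual readings stay hidden. The blockchain anchoring and the value-region filter are omitted from this sketch.

```python
from phe import paillier

# Key pair held by the evaluating party; workers encrypt with the public key
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

readings = [21.4, 20.9, 22.1, 21.7]            # task workers' sensed values
ciphertexts = [public_key.encrypt(x) for x in readings]

# Additive homomorphism: sum ciphertexts without decrypting any of them
encrypted_sum = sum(ciphertexts[1:], ciphertexts[0])
mean = private_key.decrypt(encrypted_sum) / len(readings)
print(mean)  # average computed without exposing any single reading
```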
The 6th generation mobile network (6G) is a multi-network-interconnection, multi-scenario-coexistence network in which multiple network domains break their original fixed boundaries to form connections and convergence. With the optimization objective of maximizing network utility while ensuring performance-centric weighted fairness among flows, this paper designs a reinforcement learning-based cloud-edge autonomous multi-domain data center network architecture that achieves single-domain autonomy and multi-domain collaboration. Because the utilities of different flows conflict, the bandwidth fairness allocation problem for various types of flows is formulated by considering differently defined reward functions. Regarding the tradeoff between fairness and utility, this paper derives the corresponding reward functions for the cases where the flows undergo abrupt changes and where they change smoothly. In addition, to accommodate the Quality of Service (QoS) requirements of multiple types of flows, this paper proposes a multi-domain autonomous routing algorithm called LSTM+MADDPG. By introducing a Long Short-Term Memory (LSTM) layer in the actor and critic networks, more information about temporal continuity is incorporated, further enhancing adaptability to changes in the dynamic network environment. The LSTM+MADDPG algorithm is compared with the latest reinforcement learning algorithms in experiments on real network topologies and traffic traces; the experimental results show that LSTM+MADDPG improves the delay convergence speed by 14.6% and postpones the onset of packet loss by 18.2% compared with other algorithms.
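One way to blend utility with performance-centric weighted fairness in a single reward is sketched below, using a log-utility term and Jain's fairness index over weight-normalized throughputs; the paper defines separate reward functions for abrupt and smooth flow changes, so this single blended form and its weight beta are assumptions.

```python
import numpy as np

def fairness_utility_reward(throughputs, weights, beta=0.5):
    """Reward trading off total utility against weighted fairness.

    Combines summed log-utility with Jain's fairness index computed
    on weight-normalized throughputs; beta balances the two terms.
    This is an illustrative form, not the paper's reward definition.
    """
    x = np.asarray(throughputs, dtype=float)
    w = np.asarray(weights, dtype=float)
    utility = np.sum(np.log1p(x))            # diminishing returns per flow
    norm = x / w                             # fairness on weighted shares
    jain = norm.sum() ** 2 / (len(norm) * np.sum(norm ** 2))
    return (1.0 - beta) * utility + beta * jain

# Three flows with weights expressing their relative QoS priority
print(fairness_utility_reward([10.0, 2.0, 4.0], [2.0, 1.0, 1.0]))
```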
Mechanically cleaved two-dimensional materials are random in size and thickness. Recognizing atomically thin flakes by human experts is inefficient and unsuitable for scalable production. Deep learning algorithms have been adopted as an alternative; nevertheless, a major challenge is the lack of sufficient actual training images. Here we report the generation of synthetic two-dimensional-material images using StyleGAN3 to complement the dataset. A DeepLabv3Plus network is trained with the synthetic images, which reduces overfitting and improves recognition accuracy to over 90%. A semi-supervised technique for labeling images is introduced to reduce manual effort. The sharper edges recognized by this method facilitate material stacking with precise edge alignment, which benefits the exploration of novel properties of layered-material devices that crucially depend on the interlayer twist angle. This feasible and efficient method allows for the rapid, high-quality manufacturing of atomically thin materials and devices.
基金National Natural Science Foundation of China under Grant No.61973037China Postdoctoral Science Foundation under Grant No.2022M720419。
文摘Automatic modulation recognition(AMR)of radiation source signals is a research focus in the field of cognitive radio.However,the AMR of radiation source signals at low SNRs still faces a great challenge.Therefore,the AMR method of radiation source signals based on two-dimensional data matrix and improved residual neural network is proposed in this paper.First,the time series of the radiation source signals are reconstructed into two-dimensional data matrix,which greatly simplifies the signal preprocessing process.Second,the depthwise convolution and large-size convolutional kernels based residual neural network(DLRNet)is proposed to improve the feature extraction capability of the AMR model.Finally,the model performs feature extraction and classification on the two-dimensional data matrix to obtain the recognition vector that represents the signal modulation type.Theoretical analysis and simulation results show that the AMR method based on two-dimensional data matrix and improved residual network can significantly improve the accuracy of the AMR method.The recognition accuracy of the proposed method maintains a high level greater than 90% even at -14 dB SNR.
文摘Aeromagnetic data over the Mamfe Basin have been processed. A regional magnetic gridded dataset was obtained from the Total Magnetic Intensity (TMI) data grid using a 3 × 3 convolution (Hanning) filter to remove regional trends. Major similarities in magnetic field orientation and intensities were observed at identical locations on both the regional and TMI data grids. From the regional and TMI gridded datasets, the residual dataset was generated which represents the very shallow geological features of the basin. Processing this residual data grid using the Source Parameter Imaging (SPI) for magnetic depth suggests that the estimated depths to magnetic sources in the basin range from about 271 m to 3552 m. The highest depths are located in two main locations somewhere around the central portion of the study area which correspond to the area with positive magnetic susceptibilities, as well as the areas extending outwards across the eastern boundary of the study area. Shallow magnetic depths are prominent towards the NW portion of the basin and also correspond to areas of negative magnetic susceptibilities. The basin generally exhibits a variation in depth of magnetic sources with high, average and shallow depths. The presence of intrusive igneous rocks was also observed in this basin. This characteristic is a pointer to the existence of geologic resources of interest for exploration in the basin.
基金The Deanship of Scientific Research at Hashemite University partially funds this workDeanship of Scientific Research at the Northern Border University,Arar,KSA for funding this research work through the project number“NBU-FFR-2024-1580-08”.
文摘Cloud Datacenter Network(CDN)providers usually have the option to scale their network structures to allow for far more resource capacities,though such scaling options may come with exponential costs that contradict their utility objectives.Yet,besides the cost of the physical assets and network resources,such scaling may also imposemore loads on the electricity power grids to feed the added nodes with the required energy to run and cool,which comes with extra costs too.Thus,those CDNproviders who utilize their resources better can certainly afford their services at lower price-units when compared to others who simply choose the scaling solutions.Resource utilization is a quite challenging process;indeed,clients of CDNs usually tend to exaggerate their true resource requirements when they lease their resources.Service providers are committed to their clients with Service Level Agreements(SLAs).Therefore,any amendment to the resource allocations needs to be approved by the clients first.In this work,we propose deploying a Stackelberg leadership framework to formulate a negotiation game between the cloud service providers and their client tenants.Through this,the providers seek to retrieve those leased unused resources from their clients.Cooperation is not expected from the clients,and they may ask high price units to return their extra resources to the provider’s premises.Hence,to motivate cooperation in such a non-cooperative game,as an extension to theVickery auctions,we developed an incentive-compatible pricingmodel for the returned resources.Moreover,we also proposed building a behavior belief function that shapes the way of negotiation and compensation for each client.Compared to other benchmark models,the assessment results showthat our proposed models provide for timely negotiation schemes,allowing for better resource utilization rates,higher utilities,and grid-friend CDNs.
基金supported by the National Key Research and Development Program of China(grant number 2019YFE0123600)。
文摘The power Internet of Things(IoT)is a significant trend in technology and a requirement for national strategic development.With the deepening digital transformation of the power grid,China’s power system has initially built a power IoT architecture comprising a perception,network,and platform application layer.However,owing to the structural complexity of the power system,the construction of the power IoT continues to face problems such as complex access management of massive heterogeneous equipment,diverse IoT protocol access methods,high concurrency of network communications,and weak data security protection.To address these issues,this study optimizes the existing architecture of the power IoT and designs an integrated management framework for the access of multi-source heterogeneous data in the power IoT,comprising cloud,pipe,edge,and terminal parts.It further reviews and analyzes the key technologies involved in the power IoT,such as the unified management of the physical model,high concurrent access,multi-protocol access,multi-source heterogeneous data storage management,and data security control,to provide a more flexible,efficient,secure,and easy-to-use solution for multi-source heterogeneous data access in the power IoT.
基金supported by the National Natural Science Foundation of China(Nos.41564001,41674133,41572185,and 41604104)the Distinguished Young Talent Foundation of Jiangxi Province(No.20171BCB23068)
文摘Multiturn coils is an effective transmitter for transient electromagnetic method(TEM) used in narrow space and complex terrain at presently. However, its high mutual inductance coupling and long turn-off time affect the quality of later data processing and interpretation. Compared with multiturn coils, the new conical source has low mutual inductance and short turn-off time. Based on the superposition principle, we use Hankel transform and numerical filtering method for forward modelling of the conical source field in the layered-media and explore TEM characteristics excited by this source. We apply improved damped least square inversion to integrated transient electromagnetic(TEM) data. We first invert the induced voltage into similar resistivity and apparent depth, and then use the inverted results as input parameters in the initial model and transform the apparent resistivity data into the frequency domain. Then, damped least square inversion is performed in the frequency domain using the initial model. Subsequently, we use automated model building to search for the extremes and inflection points in the resistivity–depth data that are treated as critical layer parameters. The inversion of theoretical and observed data suggests that the method modifies the resistivity and depth and yields a model of the underground layers.
文摘MORPAS is a special GIS (geographic information system) software system, based on the MAPGIS platform whose aim is to prospect and evaluate mineral resources quantificationally by synthesizing geological, geophysical, geochemical and remote sensing data. It overlays geological database management, geological background and geological abnormality analysis, image processing of remote sensing and comprehensive abnormality analysis, etc.. It puts forward an integrative solution for the application of GIS in basic-level units and the construction of information engineering in the geological field. As the popularization of computer networks and the request of data sharing, it is necessary to extend its functions in data management so that all its data files can be accessed in the network server. This paper utilizes some MAPGIS functions for the second development and ADO (access data object) technique to access multi-source geological data in SQL Server databases. Then remote visiting and congruous management will be realized in the MORPAS system.
基金the research project is funded by Abdullah Alrushaid Chair for Earth Science Remote Sensing Research at King Saud University,Riyadh,Saudi Arabia.。
文摘Understanding the origins of potential source rocks and unraveling the intricate connections between reservoir oils and their source formations in the Siwa Basin(Western Desert,Egypt)necessitate a thorough oil-source correlation investigation.This objective is achieved through a meticulous analysis of well-log responses,Rock-Eval pyrolysis,and biomarker data.The analysis of Total Organic Carbon across 31 samples representing Paleozoic formations in the Siwa A-1X well reveals a spectrum of organic richness ranging from 0.17 wt%to 2.04 wt%,thereby highlighting diverse levels of organic content and the presence of both Type II and Type III kerogen.Examination of the fingerprint characteristics of eight samples from the well suggests that the Dhiffah Formation comprises a blend of terrestrial and marine organic matter.Notably,a significant contribution from more oxidized residual organic matter and gas-prone Type III kerogen is observed.Contrarily,the Desouky and Zeitoun formations exhibit mixed organic matter indicative of a transitional environment,and thus featuring a pronounced marine influence within a more reducing setting,which is associated with Type II kerogen.Through analysis of five oil samples from different wells—SIWA L-1X,SIWA R-3X,SIWA D-1X,PTAH 5X,and PTAH 6X,it is evident that terrestrial organic matter,augmented by considerable marine input,was deposited in an oxidizing environment,and contains Type III kerogen.Geochemical scrutiny confirms the coexistence of mixed terrestrial organic matter within varying redox environments.Noteworthy is the uniformity of identified kerogen Types II and III across all samples,known to have potential for hydrocarbon generation.The discovery presented in this paper unveils captivating prospects concerning the genesis of oil in the Jurassic Safa reservoir,suggesting potential links to Paleozoic sources or even originating from the Safa Member itself.These revelations mark a substantial advancement in understanding source rock dynamics and their intricate relationship with reservoir oils within the Siwa Basin.By illuminating the processes of hydrocarbon genesis in the region,this study significantly enriches our knowledge base.
基金The National Natural Science Foundation ofChia(No60372076)The Important cienceand Technology Key Item of Shanghai Science and Technology Bureau ( No05dz15004)
文摘Real traffic information was analyzed in the statistical characteristics and approximated as a Gaussian time series. A data source model, called two states constant bit rate (TSCBR), was proposed in dynamic traffic monitoring sensor networks. Analysis of autocorrelation of the models shows that the proposed TSCBR model matches with the statistical characteristics of real data source closely. To further verify the validity of the TSCBR data source model, the performance metrics of power consumption and network lifetime was studied in the evaluation of sensor media access control (SMAC) algorithm. The simulation results show that compared with traditional data source models, TSCBR model can significantly improve accuracy of the algorithm evaluation.
基金supported by the Comprehensive Research Facility for Fusion Technology Program of China(No.2018-000052-73-01-001228)National Natural Science Foundation of China(No.11975264)。
文摘A magnetic field produced by a current flowing through the plasma grid(PG) is one of the solutions to reduce the collisional loss of negative ions in a negative ion source, which reduces the electron temperature in front of the PG. However, the magnetic field diffused into the driver has some influence on the plasma outflowing. In order to investigate the effect of changing this magnetic field on the outflowing of plasma from the driver, a circular ring(absorber) of high permeability iron has been introduced at the driver exit, which can reduce the magnetic field around it and improve plasma outflowing. With the application of the absorber, the electron density is increased by about 35%, and the extraction current measured from the extraction grid is increased from 1.02 A to 1.29 A. The results of the extraction experiment with cesium injection show that both the extraction grid(EG) current and H-current are increased when the absorber is introduced.
基金This work was supported by the National Key R&D Program of China under Grant 2023YFB2703802the Hunan Province Innovation and Entrepreneurship Training Program for College Students S202311528073.
文摘Sharing data while protecting privacy in the industrial Internet is a significant challenge.Traditional machine learning methods require a combination of all data for training;however,this approach can be limited by data availability and privacy concerns.Federated learning(FL)has gained considerable attention because it allows for decentralized training on multiple local datasets.However,the training data collected by data providers are often non-independent and identically distributed(non-IID),resulting in poor FL performance.This paper proposes a privacy-preserving approach for sharing non-IID data in the industrial Internet using an FL approach based on blockchain technology.To overcome the problem of non-IID data leading to poor training accuracy,we propose dynamically updating the local model based on the divergence of the global and local models.This approach can significantly improve the accuracy of FL training when there is relatively large dispersion.In addition,we design a dynamic gradient clipping algorithm to alleviate the influence of noise on the model accuracy to reduce potential privacy leakage caused by sharing model parameters.Finally,we evaluate the performance of the proposed scheme using commonly used open-source image datasets.The simulation results demonstrate that our method can significantly enhance the accuracy while protecting privacy and maintaining efficiency,thereby providing a new solution to data-sharing and privacy-protection challenges in the industrial Internet.
文摘With the recent technological developments,massive vehicular ad hoc networks(VANETs)have been established,enabling numerous vehicles and their respective Road Side Unit(RSU)components to communicate with oneanother.The best way to enhance traffic flow for vehicles and traffic management departments is to share thedata they receive.There needs to be more protection for the VANET systems.An effective and safe methodof outsourcing is suggested,which reduces computation costs by achieving data security using a homomorphicmapping based on the conjugate operation of matrices.This research proposes a VANET-based data outsourcingsystem to fix the issues.To keep data outsourcing secure,the suggested model takes cryptography models intoaccount.Fog will keep the generated keys for the purpose of vehicle authentication.For controlling and overseeingthe outsourced data while preserving privacy,the suggested approach considers the Trusted Certified Auditor(TCA).Using the secret key,TCA can identify the genuine identity of VANETs when harmful messages aredetected.The proposed model develops a TCA-based unique static vehicle labeling system using cryptography(TCA-USVLC)for secure data outsourcing and privacy preservation in VANETs.The proposed model calculatesthe trust of vehicles in 16 ms for an average of 180 vehicles and achieves 98.6%accuracy for data encryption toprovide security.The proposedmodel achieved 98.5%accuracy in data outsourcing and 98.6%accuracy in privacypreservation in fog-enabled VANETs.Elliptical curve cryptography models can be applied in the future for betterencryption and decryption rates with lightweight cryptography operations.
文摘Large-scale wireless sensor networks(WSNs)play a critical role in monitoring dangerous scenarios and responding to medical emergencies.However,the inherent instability and error-prone nature of wireless links present significant challenges,necessitating efficient data collection and reliable transmission services.This paper addresses the limitations of existing data transmission and recovery protocols by proposing a systematic end-to-end design tailored for medical event-driven cluster-based large-scale WSNs.The primary goal is to enhance the reliability of data collection and transmission services,ensuring a comprehensive and practical approach.Our approach focuses on refining the hop-count-based routing scheme to achieve fairness in forwarding reliability.Additionally,it emphasizes reliable data collection within clusters and establishes robust data transmission over multiple hops.These systematic improvements are designed to optimize the overall performance of the WSN in real-world scenarios.Simulation results of the proposed protocol validate its exceptional performance compared to other prominent data transmission schemes.The evaluation spans varying sensor densities,wireless channel conditions,and packet transmission rates,showcasing the protocol’s superiority in ensuring reliable and efficient data transfer.Our systematic end-to-end design successfully addresses the challenges posed by the instability of wireless links in large-scaleWSNs.By prioritizing fairness,reliability,and efficiency,the proposed protocol demonstrates its efficacy in enhancing data collection and transmission services,thereby offering a valuable contribution to the field of medical event-drivenWSNs.
基金Project supported by the National Natural Science Foundation of China(Grant Nos.62103375 and 62006106)the Zhejiang Provincial Philosophy and Social Science Planning Project(Grant No.22NDJC009Z)+1 种基金the Education Ministry Humanities and Social Science Foundation of China(Grant Nos.19YJCZH056 and 21YJC630120)the Natural Science Foundation of Zhejiang Province of China(Grant Nos.LY23F030003 and LQ21F020005).
文摘While progress has been made in information source localization,it has overlooked the prevalent friend and adversarial relationships in social networks.This paper addresses this gap by focusing on source localization in signed network models.Leveraging the topological characteristics of signed networks and transforming the propagation probability into effective distance,we propose an optimization method for observer selection.Additionally,by using the reverse propagation algorithm we present a method for information source localization in signed networks.Extensive experimental results demonstrate that a higher proportion of positive edges within signed networks contributes to more favorable source localization,and the higher the ratio of propagation rates between positive and negative edges,the more accurate the source localization becomes.Interestingly,this aligns with our observation that,in reality,the number of friends tends to be greater than the number of adversaries,and the likelihood of information propagation among friends is often higher than among adversaries.In addition,the source located at the periphery of the network is not easy to identify.Furthermore,our proposed observer selection method based on effective distance achieves higher operational efficiency and exhibits higher accuracy in information source localization,compared with three strategies for observer selection based on the classical full-order neighbor coverage.
基金funded by the National Natural Science Foundation of China(NSFC,grant Nos.12473079 and 12073082)the National Key R&D Program of China(No.2023YFF0725300)。
文摘Optical survey is an important means for observing resident space objects and space situational awareness.With the application of astronomical techniques and reduction method,wide field of view telescopes have made significant contributions in discovering and identifying resident space objects.However,with the development of modern optical and electronic technology,the detection limit of instruments and infrastructure has been greatly extended,leading to an extensive number of raw images and many more sources in these images.Challenges arise when reducing these data in terms of traditional measurement and calibration.Based on the amount of data,it is particularly feasible and reliable to apply machine learning algorithms.Here an end-to-end deep learning framework is developed,it is trained with a priori information on raw detections and the automatic detection task is performed on the new data acquired.The closed-loop is evaluated based on consecutive CCD images obtained with a dedicated space debris survey telescope.It is demonstrated that our framework can achieve high performance compared with the traditional method,and with data fusion,the efficiency of the system can be improved without changing hardware or deploying new devices.The technique deserves a wider application in many fields of observational astronomy.
基金supported by the National Natural Science Foundation of China(NSFC,grant Nos.42172323 and 12371454)。
文摘In source detection in the Tianlai project,locating the interferometric fringe in visibility data accurately will influence downstream tasks drastically,such as physical parameter estimation and weak source exploration.Considering that traditional locating methods are time-consuming and supervised methods require a great quantity of expensive labeled data,in this paper,we first investigate characteristics of interferometric fringes in the simulation and real scenario separately,and integrate an almost parameter-free unsupervised clustering method and seeding filling or eraser algorithm to propose a hierarchical plug and play method to improve location accuracy.Then,we apply our method to locate single and multiple sources’interferometric fringes in simulation data.Next,we apply our method to real data taken from the Tianlai radio telescope array.Finally,we compare with unsupervised methods that are state of the art.These results show that our method has robustness in different scenarios and can improve location measurement accuracy effectively.
基金supported by UniversitiKebangsaan Malaysia,under Dana Impak Perdana 2.0.(Ref:DIP–2022–020).
文摘Software Defined Networking(SDN)is programmable by separation of forwarding control through the centralization of the controller.The controller plays the role of the‘brain’that dictates the intelligent part of SDN technology.Various versions of SDN controllers exist as a response to the diverse demands and functions expected of them.There are several SDN controllers available in the open market besides a large number of commercial controllers;some are developed tomeet carrier-grade service levels and one of the recent trends in open-source SDN controllers is the Open Network Operating System(ONOS).This paper presents a comparative study between open source SDN controllers,which are known as Network Controller Platform(NOX),Python-based Network Controller(POX),component-based SDN framework(Ryu),Java-based OpenFlow controller(Floodlight),OpenDayLight(ODL)and ONOS.The discussion is further extended into ONOS architecture,as well as,the evolution of ONOS controllers.This article will review use cases based on ONOS controllers in several application deployments.Moreover,the opportunities and challenges of open source SDN controllers will be discussed,exploring carriergrade ONOS for future real-world deployments,ONOS unique features and identifying the suitable choice of SDN controller for service providers.In addition,we attempt to provide answers to several critical questions relating to the implications of the open-source nature of SDN controllers regarding vendor lock-in,interoperability,and standards compliance,Similarly,real-world use cases of organizations using open-source SDN are highlighted and how the open-source community contributes to the development of SDN controllers.Furthermore,challenges faced by open-source projects,and considerations when choosing an open-source SDN controller are underscored.Then the role of Artificial Intelligence(AI)and Machine Learning(ML)in the evolution of open-source SDN controllers in light of recent research is indicated.In addition,the challenges and limitations associated with deploying open-source SDN controllers in production networks,how can they be mitigated,and finally how opensource SDN controllers handle network security and ensure that network configurations and policies are robust and resilient are presented.Potential opportunities and challenges for future Open SDN deployment are outlined to conclude the article.
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 62103375, 62006106, 61877055, and 62171413), the Philosophy and Social Science Planning Project of Zhejiang Province, China (Grant No. 22NDJC009Z), the Education Ministry Humanities and Social Science Foundation of China (Grant No. 19YJCZH056), and the Natural Science Foundation of Zhejiang Province, China (Grant Nos. LY23F030003, LY22F030006, and LQ21F020005).
Abstract: The dissemination of information across various locations is a ubiquitous occurrence; however, prevalent methodologies for multi-source identification frequently overlook the fact that sources may initiate dissemination at distinct initial moments. Despite many research results on multi-source identification, the challenge of locating sources with varying initiation times from a limited subset of observational nodes remains unresolved. In this study, we establish the backward-spread tree theorem and the source centrality theorem, and develop a backward-spread centrality algorithm that identifies all the information sources triggering the spread at different start times. The proposed algorithm requires no prior knowledge of the number of sources, yet it can estimate both the initial spread moment and the spread duration. Its core concept is to infer suspected sources through the source centrality theorem and then locate the true sources among the suspects with linear programming. Extensive experiments on synthetic and real network simulations corroborate the superiority of our method in both efficacy and efficiency. Furthermore, we find that our method remains robust irrespective of the number of sources and the average degree of the network. Compared with classical and state-of-the-art source identification methods, our method generally improves the AUROC value by 0.1 to 0.2.
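The theorems and the linear-programming step are the paper's contribution; the sketch below only illustrates the underlying intuition that a true source, once its unknown start time is factored out, should explain the observers' infection times consistently. The node-ranking heuristic, the consistency score, and all names here are assumptions for illustration, not the authors' algorithm.

```python
# Illustrative sketch (not the paper's method): rank candidate
# sources by how consistently shortest-path delays from each node
# explain the infection times seen at a few observer nodes, with
# the start time t0 treated as unknown.
import networkx as nx
import numpy as np

def rank_candidate_sources(G, observed_times, top_k=5):
    """observed_times: {observer_node: infection_time}."""
    observers = list(observed_times)
    scores = {}
    for v in G.nodes:
        # Hop distance stands in for spread delay (unit edge delay).
        dists = nx.single_source_shortest_path_length(G, v)
        if not all(o in dists for o in observers):
            continue                      # v cannot reach every observer
        # If v started at t0, we expect time(o) = t0 + dist(v, o),
        # so the per-observer offsets should all agree.
        offsets = [observed_times[o] - dists[o] for o in observers]
        if np.mean(offsets) < 0:
            continue                      # would need a start before time 0
        scores[v] = np.var(offsets)       # lower variance = more consistent
    return sorted(scores, key=scores.get)[:top_k]
```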
Funding: This work was supported by the National Natural Science Foundation of China under Grant 62233003 and the National Key Research and Development Program of China under Grant 2020YFB1708602.
Abstract: The proliferation of intelligent, connected Internet of Things (IoT) devices facilitates data collection. However, task workers may be reluctant to participate in data collection due to privacy concerns, and task requesters may be concerned about the validity of the collected data. Hence, it is vital to evaluate the quality of the data collected by task workers while protecting privacy in spatial crowdsourcing (SC) data collection tasks with IoT. To this end, this paper proposes PARE, a privacy-preserving data reliability evaluation scheme for SC in IoT. First, we design a data uploading format using blockchain and the Paillier homomorphic cryptosystem, providing unchangeable and traceable data while addressing privacy concerns. Second, based on the uploaded data, we propose a method to determine the approximate correct value region without knowing the exact values. Finally, we offer a data filtering mechanism based on the Paillier cryptosystem using this value region. The evaluation and analysis results show that PARE outperforms the existing solution in terms of both performance and privacy protection.
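The property PARE builds on is Paillier's additive homomorphism: multiplying two ciphertexts yields an encryption of the sum of their plaintexts, so aggregates can be computed without decrypting any individual reading. A toy, self-contained sketch of that property follows; the tiny fixed primes and all names are for illustration only, and a real deployment would use large random primes and a vetted library.

```python
# Toy Paillier cryptosystem illustrating the additive homomorphism:
# E(m1) * E(m2) mod n^2 decrypts to m1 + m2. Demo-sized primes only;
# real systems need ~2048-bit primes and a vetted implementation.
import math
import random

p, q = 2003, 2011                       # small demo primes
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                    # valid for the g = n + 1 variant

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def add_cipher(c1, c2):
    return (c1 * c2) % n2               # ciphertext product = plaintext sum

readings = [17, 25, 20]                 # workers' private sensor values
total = encrypt(readings[0])
for x in readings[1:]:
    total = add_cipher(total, encrypt(x))
assert decrypt(total) == sum(readings)  # aggregate recovered, inputs hidden
```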
Abstract: The 6th-generation (6G) mobile network is a multi-network interconnection and multi-scenario coexistence network in which multiple network domains break their original fixed boundaries to form connections and convergence. With the optimization objective of maximizing network utility while ensuring performance-centric weighted fairness among flows, this paper designs a reinforcement learning-based cloud-edge autonomous multi-domain data center network architecture that achieves single-domain autonomy and multi-domain collaboration. Because the utilities of different flows conflict, the bandwidth fairness allocation problem for various types of flows is formulated with differently defined reward functions. Regarding the tradeoff between fairness and utility, we design corresponding reward functions for the cases where the flows change abruptly and where they change smoothly. In addition, to accommodate the Quality of Service (QoS) requirements of multiple types of flows, this paper proposes a multi-domain autonomous routing algorithm called LSTM+MADDPG. Introducing a Long Short-Term Memory (LSTM) layer into the actor and critic networks adds information about temporal continuity, further enhancing adaptability to changes in the dynamic network environment. The LSTM+MADDPG algorithm is compared with the latest reinforcement learning algorithms in experiments on real network topologies and traffic traces; the results show that LSTM+MADDPG improves the delay convergence speed by 14.6% and delays the onset of packet loss by 18.2% compared with other algorithms.
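The architectural idea, an LSTM layer feeding the actor's fully connected head so the policy conditions on recent traffic history, can be sketched as below. Layer sizes, the softmax routing head, and all names are illustrative assumptions rather than the paper's exact architecture; the critic would gain an analogous LSTM layer.

```python
# Minimal sketch of the LSTM+MADDPG actor idea: an LSTM ahead of the
# policy head lets each domain agent see temporal continuity in its
# observations. Dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn

class LSTMActor(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Softmax(dim=-1),
        )

    def forward(self, obs_seq, state=None):
        # obs_seq: (batch, time, obs_dim) window of link observations.
        out, state = self.lstm(obs_seq, state)
        # Act on the last step's summary of the recent traffic history.
        return self.head(out[:, -1, :]), state

# One agent per domain: e.g. 8 recent observations of 10 link features
# mapped to a distribution over 4 candidate next-hop routes.
actor = LSTMActor(obs_dim=10, act_dim=4)
probs, _ = actor(torch.randn(2, 8, 10))
```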
Funding: Project supported by the National Key Research and Development Program of China (Grant No. 2022YFB2803900), the National Natural Science Foundation of China (Grant Nos. 61974075 and 61704121), the Natural Science Foundation of Tianjin Municipality (Grant Nos. 22JCZDJC00460 and 19JCQNJC00700), the Tianjin Municipal Education Commission (Grant No. 2019KJ028), and the Fundamental Research Funds for the Central Universities (Grant No. 22JCZDJC00460).
Abstract: Mechanically cleaved two-dimensional materials are random in size and thickness. Recognition of atomically thin flakes by human experts is inefficient and unsuitable for scalable production. Deep learning algorithms have been adopted as an alternative, but a major challenge is the lack of sufficient real training images. Here we report the generation of synthetic two-dimensional-material images using StyleGAN3 to complement the dataset. A DeepLabv3Plus network trained with the synthetic images shows reduced overfitting and recognition accuracy improved to over 90%. A semi-supervised technique for labeling images is introduced to reduce manual effort. The sharper edges recognized by this method facilitate material stacking with precise edge alignment, which benefits the exploration of novel properties of layered-material devices that depend crucially on the interlayer twist angle. This feasible and efficient method allows for the rapid, high-quality manufacturing of atomically thin materials and devices.
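The training recipe the abstract describes, mixing real and GAN-generated flake images in one dataset and fine-tuning a semantic segmentation model, can be sketched as follows. torchvision ships DeepLabv3 rather than the Plus variant used in the paper, so it stands in here; the random stand-in tensors, image sizes, and the two-class (background/flake) setup are assumptions.

```python
# Sketch: train a segmentation model on a mix of real and synthetic
# (GAN-generated) flake images. Random tensors stand in for real
# microscope images and StyleGAN3 renders with their masks.
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset
from torchvision.models.segmentation import deeplabv3_resnet50

real_ds = TensorDataset(torch.randn(16, 3, 128, 128),
                        torch.randint(0, 2, (16, 128, 128)))
synthetic_ds = TensorDataset(torch.randn(16, 3, 128, 128),
                             torch.randint(0, 2, (16, 128, 128)))
train_loader = DataLoader(ConcatDataset([real_ds, synthetic_ds]),
                          batch_size=8, shuffle=True)

model = deeplabv3_resnet50(num_classes=2)   # background vs. flake
model.train()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

for images, masks in train_loader:
    opt.zero_grad()
    logits = model(images)["out"]           # (B, 2, H, W) per-pixel scores
    loss = loss_fn(logits, masks)           # masks: (B, H, W) class ids
    loss.backward()
    opt.step()
```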