Funding: National Natural Science Foundation of China under Grant No. 61973037; China Postdoctoral Science Foundation under Grant No. 2022M720419.
Abstract: Automatic modulation recognition (AMR) of radiation source signals is a research focus in the field of cognitive radio. However, the AMR of radiation source signals at low SNRs still faces a great challenge. Therefore, an AMR method for radiation source signals based on a two-dimensional data matrix and an improved residual neural network is proposed in this paper. First, the time series of the radiation source signals are reconstructed into a two-dimensional data matrix, which greatly simplifies the signal preprocessing process. Second, a residual neural network based on depthwise convolution and large-size convolutional kernels (DLRNet) is proposed to improve the feature extraction capability of the AMR model. Finally, the model performs feature extraction and classification on the two-dimensional data matrix to obtain a recognition vector that represents the signal modulation type. Theoretical analysis and simulation results show that the AMR method based on the two-dimensional data matrix and improved residual network can significantly improve recognition accuracy. The recognition accuracy of the proposed method remains above 90% even at -14 dB SNR.
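As a rough illustration of the two ingredients named above, the sketch below reshapes a 1-D I/Q sequence into a two-dimensional matrix and passes it through a residual block built from a depthwise convolution with a large kernel. The block layout, kernel size, and matrix dimensions are illustrative assumptions, not the exact DLRNet architecture.

```python
# Minimal sketch (PyTorch): reshape a 1-D signal into a 2-D matrix and apply a
# residual block built from a depthwise convolution with a large kernel.
# Sizes and layout are illustrative assumptions, not the exact DLRNet design.
import torch
import torch.nn as nn

class DepthwiseResidualBlock(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 7):
        super().__init__()
        pad = kernel_size // 2
        self.depthwise = nn.Conv2d(channels, channels, kernel_size,
                                   padding=pad, groups=channels)   # depthwise: one filter per channel
        self.pointwise = nn.Conv2d(channels, channels, 1)           # 1x1 channel mixing
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU()

    def forward(self, x):
        out = self.act(self.bn(self.pointwise(self.depthwise(x))))
        return x + out                                              # residual connection

# Reshape a complex baseband sequence of length 1024 into a 2-channel 32x32 "image".
iq = torch.randn(8, 2, 1024)                  # batch of 8 signals, I/Q channels
matrix = iq.reshape(8, 2, 32, 32)             # two-dimensional data matrix
features = DepthwiseResidualBlock(2)(matrix)  # shape preserved: (8, 2, 32, 32)
```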
Abstract: Aeromagnetic data over the Mamfe Basin have been processed. A regional magnetic gridded dataset was obtained from the Total Magnetic Intensity (TMI) data grid using a 3 × 3 convolution (Hanning) filter to remove regional trends. Major similarities in magnetic field orientation and intensities were observed at identical locations on both the regional and TMI data grids. From the regional and TMI gridded datasets, the residual dataset was generated, which represents the very shallow geological features of the basin. Processing this residual data grid using Source Parameter Imaging (SPI) for magnetic depth suggests that the estimated depths to magnetic sources in the basin range from about 271 m to 3552 m. The greatest depths are located in two main areas around the central portion of the study area, which correspond to the area with positive magnetic susceptibilities, as well as the areas extending outwards across the eastern boundary of the study area. Shallow magnetic depths are prominent towards the NW portion of the basin and correspond to areas of negative magnetic susceptibilities. The basin generally exhibits a variation in depth of magnetic sources with great, average and shallow depths. The presence of intrusive igneous rocks was also observed in this basin. This characteristic is a pointer to the existence of geologic resources of interest for exploration in the basin.
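The regional/residual separation described here can be sketched with a 3 × 3 smoothing convolution followed by a subtraction. The kernel weights below are a common Hanning-type (raised-cosine) choice and are an assumption; the exact weights used in the original processing software may differ.

```python
# Sketch of regional/residual separation on a gridded TMI dataset.
# A common 3x3 Hanning-type smoothing kernel is assumed here.
import numpy as np
from scipy.ndimage import convolve

tmi = np.random.randn(200, 200)          # stand-in for the gridded Total Magnetic Intensity

hanning_3x3 = np.array([[1., 2., 1.],
                        [2., 4., 2.],
                        [1., 2., 1.]]) / 16.0   # outer product of the 3-point Hanning window

regional = convolve(tmi, hanning_3x3, mode="nearest")  # smoothed, regional component
residual = tmi - regional                              # shallow-source (residual) component
```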
Funding: Supported by the National Natural Science Foundation of China (NSFC, grant Nos. 42172323 and 12371454).
Abstract: In source detection in the Tianlai project, accurately locating the interferometric fringe in visibility data strongly influences downstream tasks such as physical parameter estimation and weak source exploration. Considering that traditional locating methods are time-consuming and supervised methods require a great quantity of expensive labeled data, in this paper we first investigate the characteristics of interferometric fringes in simulated and real scenarios separately, and integrate an almost parameter-free unsupervised clustering method with seed-filling or eraser algorithms to propose a hierarchical plug-and-play method that improves location accuracy. Then, we apply our method to locate the interferometric fringes of single and multiple sources in simulation data. Next, we apply our method to real data taken from the Tianlai radio telescope array. Finally, we compare it with state-of-the-art unsupervised methods. These results show that our method is robust in different scenarios and can effectively improve location measurement accuracy.
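One plausible reading of the unsupervised step is to threshold the visibility amplitude map and cluster the remaining bright pixels with a density-based, nearly parameter-free method; the sketch below uses DBSCAN for that purpose. The array shape, threshold, and clustering parameters are illustrative values, not the paper's settings.

```python
# Sketch: threshold a (time, frequency) visibility amplitude map and cluster the
# bright pixels with DBSCAN to localize candidate fringe regions.
import numpy as np
from sklearn.cluster import DBSCAN

amp = np.abs(np.random.randn(512, 256))          # |visibility| over (time, frequency)
mask = amp > np.percentile(amp, 99)              # keep the brightest 1% of pixels
coords = np.argwhere(mask)                       # (row, col) positions of candidate pixels

labels = DBSCAN(eps=3, min_samples=10).fit_predict(coords)
for k in set(labels) - {-1}:                     # label -1 marks noise pixels
    cluster = coords[labels == k]
    r0, c0 = cluster.min(axis=0)
    r1, c1 = cluster.max(axis=0)
    print(f"fringe candidate {k}: time rows {r0}-{r1}, freq cols {c0}-{c1}")
```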
Funding: The Deanship of Scientific Research at Hashemite University partially funds this work; the Deanship of Scientific Research at the Northern Border University, Arar, KSA, funded this research work through project number "NBU-FFR-2024-1580-08".
Abstract: Cloud Datacenter Network (CDN) providers usually have the option to scale their network structures to allow for far more resource capacity, though such scaling options may come with exponential costs that contradict their utility objectives. Yet, besides the cost of the physical assets and network resources, such scaling may also impose more load on the electricity power grids to feed the added nodes with the energy required to run and cool them, which comes with extra costs too. Thus, those CDN providers who utilize their resources better can afford to offer their services at lower price units than those who simply choose to scale. Resource utilization is quite a challenging process; indeed, CDN clients usually tend to exaggerate their true resource requirements when they lease resources. Service providers are committed to their clients through Service Level Agreements (SLAs); therefore, any amendment to the resource allocations needs to be approved by the clients first. In this work, we propose deploying a Stackelberg leadership framework to formulate a negotiation game between cloud service providers and their client tenants, through which the providers seek to retrieve leased but unused resources from their clients. Cooperation is not expected from the clients, and they may ask high price units to return their extra resources to the provider's premises. Hence, to motivate cooperation in such a non-cooperative game, we developed an incentive-compatible pricing model for the returned resources as an extension of Vickrey auctions. Moreover, we also propose building a behavior belief function that shapes the negotiation and compensation for each client. Compared with other benchmark models, the assessment results show that our proposed models provide timely negotiation schemes, better resource utilization rates, higher utilities, and grid-friendly CDNs.
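To make the incentive-compatible pricing idea concrete, here is a toy Vickrey-style (second-price) buy-back: each client reports a price per unit of unused capacity, the provider reclaims the cheapest offers, and pays winners the first excluded price so that truthful reporting is a sensible strategy. The selection rule, names, and numbers are illustrative; the paper's Stackelberg game and belief function are not reproduced.

```python
# Toy sketch of a Vickrey-style (second-price) buy-back of unused resources.
def reclaim_resources(offers, units_needed):
    """offers: list of (client, units, price_per_unit); returns (client, units, payment)."""
    ranked = sorted(offers, key=lambda o: o[2])          # cheapest offers first
    winners, acquired = [], 0
    for client, units, price in ranked:
        if acquired >= units_needed:
            break
        winners.append((client, min(units, units_needed - acquired)))
        acquired += units
    # payment uses the first losing offer's unit price (or the last offer if all win)
    losing_price = ranked[len(winners)][2] if len(winners) < len(ranked) else ranked[-1][2]
    return [(client, units, units * losing_price) for client, units in winners]

print(reclaim_resources([("A", 10, 0.5), ("B", 20, 0.8), ("C", 15, 0.6)], 25))
# -> A and C win and are both paid B's (losing) unit price of 0.8
```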
Funding: Supported by the National Key Research and Development Program of China (grant number 2019YFE0123600).
Abstract: The power Internet of Things (IoT) is a significant trend in technology and a requirement for national strategic development. With the deepening digital transformation of the power grid, China's power system has initially built a power IoT architecture comprising a perception layer, a network layer, and a platform application layer. However, owing to the structural complexity of the power system, the construction of the power IoT continues to face problems such as complex access management of massive heterogeneous equipment, diverse IoT protocol access methods, highly concurrent network communications, and weak data security protection. To address these issues, this study optimizes the existing architecture of the power IoT and designs an integrated management framework for the access of multi-source heterogeneous data in the power IoT, comprising cloud, pipe, edge, and terminal parts. It further reviews and analyzes the key technologies involved in the power IoT, such as unified management of the physical model, highly concurrent access, multi-protocol access, multi-source heterogeneous data storage management, and data security control, to provide a more flexible, efficient, secure, and easy-to-use solution for multi-source heterogeneous data access in the power IoT.
Abstract: Urban geography has long been concerned with the influence of human settlements on urban vitality, but few studies reveal this influence at the micro-scale. This paper analyzes the spatial distribution characteristics of human settlement quality and urban vitality at the micro-scale, using the Geodetector method and geographically weighted regression to analyze the relationship between human settlements and urban vitality. The results are as follows: there is still significant room for improvement in human settlement quality in Shahekou District, with obvious spatial dependence characteristics and significant gaps between the various subsystems; the urban vitality of Shahekou District is clearly time-dependent and changes significantly over time, which is related to human settlement quality, and its spatial distribution presents a single-core structure with strong relative stability; the spatial distribution of cold and hot spots shows a pattern of "high in the north and low in the south, high in the east and low in the west", with an increasing trend from southwest to northeast; the reachability of public transport has a significant impact on urban vitality, and its synergy with other variables is the leading force shaping the spatial distribution of urban vitality. The environmental system, support system, and social system are the significant factors affecting the urban vitality of Shahekou District.
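The Geodetector factor detector quantifies how much a categorical factor explains spatial variance through q = 1 − Σ_h N_h σ_h² / (N σ²). The minimal sketch below computes that statistic on toy arrays; the data are placeholders, not the Shahekou District measurements.

```python
# Minimal Geodetector-style factor detector: q = 1 - sum_h(N_h * var_h) / (N * var).
import numpy as np

def geodetector_q(vitality, factor_strata):
    vitality = np.asarray(vitality, dtype=float)
    strata = np.asarray(factor_strata)
    total_ss = len(vitality) * vitality.var()                      # N * sigma^2
    within_ss = sum(len(vitality[strata == h]) * vitality[strata == h].var()
                    for h in np.unique(strata))                    # sum_h N_h * sigma_h^2
    return 1.0 - within_ss / total_ss                              # larger q => stronger explanatory power

vitality = np.array([5.0, 6.0, 5.5, 1.0, 1.2, 0.8])                # e.g. activity density per block
transit = np.array(["high", "high", "high", "low", "low", "low"])  # public-transport reachability class
print(round(geodetector_q(vitality, transit), 3))
```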
Funding: Supported by the National Natural Science Foundation of China (41977215).
Abstract: Long-runout landslides involve a massive amount of energy and can be extremely hazardous owing to their long movement distance, high mobility, and strong destructive power. Numerical methods have been widely used to predict landslide runout, but a fundamental problem that remains is how to determine reliable numerical parameters. This study proposes a framework to predict the runout of potential landslides through multi-source data collaboration and numerical analysis of historical landslide events. Specifically, for historical landslide cases, the landslide-induced seismic signal, geophysical surveys, and possible in-situ drone/phone videos (multi-source data collaboration) can validate the numerical results in terms of landslide dynamics and deposit features and help calibrate the numerical (rheological) parameters. Subsequently, the calibrated numerical parameters can be used to numerically predict the runout of potential landslides in regions with a geological setting similar to the recorded events. Application of the runout prediction approach to the 2020 Jiashanying landslide in Guizhou, China gives reasonable results in comparison with field observations; the numerical parameters are determined from the multi-source data collaboration analysis of a historical case in the region (the 2019 Shuicheng landslide). The proposed framework for landslide runout prediction can be of great utility for landslide risk assessment and disaster reduction in mountainous regions worldwide.
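The calibration idea can be illustrated with a parameter sweep: run a simple runout model over candidate rheological parameters and keep the value that best reproduces the observed runout of a historical event. The energy-line sliding-block model and all numbers below are assumptions for illustration, not the paper's numerical scheme.

```python
# Illustrative calibration loop: sweep a friction coefficient in a toy runout model
# and keep the value that best matches an observed runout from a historical event.
import numpy as np

def runout_distance(slope_deg, drop_height, friction_coeff, g=9.81):
    """Energy-line (Coulomb friction) estimate of horizontal runout distance."""
    slope = np.radians(slope_deg)
    # velocity gained on the slope, then decelerated on flat ground by friction
    v2 = 2 * g * drop_height * (1 - friction_coeff / np.tan(slope))
    if v2 <= 0:
        return 0.0
    return drop_height / np.tan(slope) + v2 / (2 * g * friction_coeff)

observed_runout = 1500.0                       # m, e.g. from field/drone mapping of a past event
candidates = np.arange(0.05, 0.60, 0.01)
errors = [abs(runout_distance(30.0, 400.0, mu) - observed_runout) for mu in candidates]
best_mu = candidates[int(np.argmin(errors))]
print(f"calibrated friction coefficient ~ {best_mu:.2f}")
# best_mu would then be reused to predict runout of nearby potential landslides
```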
Abstract: In the 21st century, with the development of the Internet, mobile devices, and information technology, society has entered a new era: the era of big data. With the help of big data technology, enterprises can obtain massive market and consumer data, conduct in-depth analysis of their business and markets, and gain a deeper understanding of consumer needs, preferences, and behaviors. At the same time, big data technology can also help enterprises innovate in human resource management and improve their performance and competitiveness. From another perspective, however, enterprises in this era also face severe challenges. First, processing and analyzing massive data requires superb data processing and analysis capabilities. Second, enterprises need to restructure their management systems to adapt to the changes of the big data era; they must treat data as assets and establish a sound data management system. In addition, enterprises also need to pay attention to protecting customer privacy and data security to avoid data leakage and abuse. In this context, this paper explores enterprise human resource management innovation in the era of big data and puts forward some suggestions for such innovation.
Abstract: With the continuous development of big data technology, the digital transformation of enterprise human resource management has become a trend. Human resources are among the most important resources of an enterprise and are crucial to its competitiveness. Enterprises need to attract, retain, and motivate excellent employees, thereby enhancing their innovation capability and improving their competitiveness and market share. To maintain an advantage in fierce market competition, enterprises need to adopt more scientific and effective human resource management methods to enhance organizational efficiency and competitiveness. This paper analyzes the dilemmas faced by enterprise human resource management, points out the new characteristics of enterprise human resource management enabled by big data, and puts forward feasible suggestions for enterprise digital transformation.
Funding: Funded by the High-Quality and Cutting-Edge Discipline Construction Project for Universities in Beijing (Internet Information, Communication University of China).
Abstract: Multi-source data plays an important role in the evolution of media convergence. Its fusion processing enables further mining of data and utilization of data value, and broadens the path for the sharing and dissemination of media data. However, it also faces serious problems in protecting user and data privacy. Many privacy protection methods have been proposed to solve the problem of privacy leakage during data sharing, but they suffer from two flaws: 1) the lack of algorithmic frameworks for specific scenarios such as dynamic datasets in the media domain; 2) the inability to solve the problem of the high computational complexity of ciphertext in multi-source data privacy protection, resulting in long encryption and decryption times. In this paper, we propose a multi-source data privacy protection method based on homomorphic encryption and blockchain technology, which solves the privacy protection problem of multi-source heterogeneous data in media dissemination and reduces ciphertext processing time. We deployed the proposed method on the Hyperledger platform for testing and compared it with privacy protection schemes based on k-anonymity and differential privacy. The experimental results show that the key generation, encryption, and decryption times of the proposed method are lower than those of data privacy protection methods based on k-anonymity and differential privacy. This significantly reduces the processing time of multi-source data, which gives the method potential for use in many applications.
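The homomorphic-encryption ingredient can be sketched with an additive scheme such as Paillier, showing that ciphertexts from different sources can be aggregated without decryption. The example assumes the third-party python-paillier (phe) package is installed; the blockchain/Hyperledger integration and the paper's exact protocol are not reproduced.

```python
# Sketch of additive homomorphic encryption with the third-party python-paillier
# package (pip install phe).  Encrypted values from different sources are summed
# without decryption; only the private-key holder can read the aggregate.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# two data sources encrypt their values independently with the shared public key
c1 = public_key.encrypt(42)
c2 = public_key.encrypt(58)

aggregate = c1 + c2                       # homomorphic addition on ciphertexts
print(private_key.decrypt(aggregate))     # -> 100
```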
Funding: Supported by the National Natural Science Foundation of China (Grant No. U2202213) and the Special Program for the Major Science and Technology Projects of Yunnan Province, China (Grant Nos. 202102AE090051-1-01 and 202202AE090001).
Abstract: In traditional medicine and ethnomedicine, medicinal plants have long been recognized as the basis for materials in therapeutic applications worldwide. In particular, the remarkable curative effect of traditional Chinese medicine during the coronavirus disease 2019 (COVID-19) pandemic has attracted extensive attention globally, and medicinal plants have therefore become increasingly popular among the public. However, with increasing demand for, and profit from, medicinal plants, commercial fraud such as adulteration or counterfeiting sometimes occurs, which poses a serious threat to clinical outcomes and the interests of consumers. With rapid advances in artificial intelligence, machine learning can be used to mine information on various medicinal plants to establish an ideal resource database. We herein present a review that introduces common machine learning algorithms and discusses their application in multi-source data analysis of medicinal plants. The combination of machine learning algorithms and multi-source data analysis facilitates comprehensive analysis and aids in the effective evaluation of the quality of medicinal plants. The findings of this review provide new possibilities for promoting the development and utilization of medicinal plants.
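As a minimal sketch of the kind of supervised model such a resource database could feed, the example below trains a random-forest classifier to separate authentic from adulterated samples using multi-source numeric features. The feature meanings and data are hypothetical placeholders, not a real medicinal-plant dataset.

```python
# Minimal sketch: random-forest classification of authentic vs. adulterated samples
# from multi-source numeric features (hypothetical data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                    # e.g. spectral peaks, marker-compound ratios, trace elements
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)    # 1 = authentic, 0 = adulterated (synthetic rule)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("hold-out accuracy:", model.score(X_test, y_test))
```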
Funding: Under the auspices of the Natural Science Foundation of China (No. 41971166).
Abstract: The urban functional area (UFA) is a core scientific issue affecting urban sustainability. The current knowledge gap is mainly reflected in the lack of multi-scale quantitative interpretation methods from the perspective of human-land interaction. In this paper, based on multi-source big data including 250 m × 250 m resolution cell phone data, 1.81×10⁵ Points of Interest (POI) records, and administrative boundary data, we built a UFA identification method and demonstrated it empirically in Shenyang City, China. We argue that the method can effectively identify multi-scale, multi-type UFAs based on human activity and further reveal the spatial correlation between urban facilities and human activity. The empirical study suggests that the employment functional zones in Shenyang City are more concentrated in the central city than other single functional zones. There are more mixed functional areas in the central city, while the planned industrial new cities need to develop comprehensive functions. UFAs show scale effects and human-land interaction patterns. We suggest that city decision makers apply multi-source big data to measure urban functional services in a more refined manner from a supply-demand perspective.
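A simplified sketch of grid-cell functional labeling is shown below: daytime/nighttime activity from cell-phone counts is combined with the POI composition of each 250 m cell to assign a functional label. The thresholds, categories, and column names are illustrative assumptions, not the paper's classification rules.

```python
# Sketch of grid-cell functional labeling from human activity and POI mix.
import pandas as pd

cells = pd.DataFrame({
    "cell_id":        [1, 2, 3],
    "day_activity":   [900, 300, 650],    # daytime cell-phone user counts
    "night_activity": [200, 800, 600],    # nighttime counts
    "poi_office":     [40, 5, 20],        # POI counts by category
    "poi_residence":  [5, 60, 25],
})

def label(row):
    ratio = row["day_activity"] / max(row["night_activity"], 1)
    if ratio > 2 and row["poi_office"] > row["poi_residence"]:
        return "employment"
    if ratio < 0.5 or row["poi_residence"] > 2 * row["poi_office"]:
        return "residential"
    return "mixed"

cells["function"] = cells.apply(label, axis=1)
print(cells[["cell_id", "function"]])
```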
Funding: Supported by the National Key Research and Development Program of China (2022YFF0800603).
Abstract: We conducted rapid inversions of the rupture process for the 2023 earthquake doublet that occurred in SE Türkiye, the first with a magnitude of M_W 7.8 and the second with a magnitude of M_W 7.6, using teleseismic and strong-motion data. The teleseismic rupture models of the two events were obtained approximately 88 and 55 minutes after their occurrences, respectively. The rupture models indicated that the first event was an asymmetric bilateral event with ruptures mainly propagating to the northeast, while the second was a unilateral event with ruptures propagating to the west. This information could be useful in locating the meizoseismal areas. Compared with the teleseismic models, the strong-motion models showed relatively higher resolution. A noticeable difference was found for the M_W 7.6 earthquake, for which the strong-motion models show a bilateral event rather than a unilateral one, although the dominant rupture direction is still westward. Nevertheless, all strong-motion models are consistent with the teleseismic models in terms of magnitudes, durations, and dominant rupture directions. This suggests that both teleseismic and strong-motion data can be used for fast determination of major source characteristics. The strong-motion data would, however, be preferable in future emergency responses since they are recorded earlier and resolve the source ruptures better.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 41930759, 41822501, 42075089, 41975014), the 2nd Scientific Expedition to the Qinghai-Tibet Plateau (2019QZKK0102), the Science and Technology Research Plan of Gansu Province (20JR10RA070), the Chinese Academy of Sciences Youth Innovation and Promotion Association (Y201874), the Youth Innovation Promotion Association CAS (QCH2019004), and iLEAPS (Integrated Land Ecosystem-Atmosphere Processes Study).
Abstract: Thousands of lakes on the Tibetan Plateau (TP) play a critical role in the regional water cycle, weather, and climate. In recent years, the areas of TP lakes have undergone drastic changes and have become a research hotspot. However, the characteristics of the lake-atmosphere interaction over these high-altitude lakes are still unclear, which inhibits model development and the accurate simulation of lake climate effects. The source region of the Yellow River (SRYR) contains the largest outflow lake and freshwater lake on the TP and is one of the most densely lake-covered regions of the TP. Since 2011, three observation sites have been set up in the Ngoring Lake basin in the SRYR to monitor the lake-atmosphere interaction and the differences in water-heat exchange between the land and lake surfaces. This study presents an eight-year (2012-19), half-hourly, observation-based dataset of lake-atmosphere interactions composed of the three sites, which represent the lake surface, the lakeside, and the land. The observations contain the basic meteorological elements, surface radiation, eddy covariance measurements, and soil temperature and moisture (for land). Information on the sites and instruments, the continuity and completeness of the data, and the differences among the observational results at the different sites is described in this study. These data have been used in previous studies to reveal energy and water exchange characteristics of TP lakes and to validate and improve lake and land surface models. The dataset is available at the National Cryosphere Desert Data Center and Science Data Bank.
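Half-hourly records like these are commonly aggregated before analysis; the sketch below shows a typical pandas workflow for daily averaging with a completeness check. The file name and column names are assumptions, not the dataset's actual schema.

```python
# Sketch of aggregating half-hourly lake-atmosphere records to daily means with a
# simple completeness filter.  File and column names are assumed placeholders.
import pandas as pd

df = pd.read_csv("ngoring_lake_site.csv", parse_dates=["timestamp"]).set_index("timestamp")
# assumed columns: sensible_heat, latent_heat, net_radiation, air_temp (half-hourly)

daily = df.resample("1D").mean()                 # daily means
coverage = df.resample("1D").count() / 48.0      # fraction of the 48 half-hours present
daily = daily.where(coverage >= 0.8)             # keep days with >= 80% data coverage
print(daily.head())
```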
Funding: This work was supported by the general program (No. 1177531) and joint funding (No. U2067205) from the National Natural Science Foundation of China.
Abstract: A benchmark experiment on ²³⁸U slab samples was conducted using a deuterium-tritium neutron source at the China Institute of Atomic Energy. The leakage neutron spectra in the 0.8-16 MeV energy range at 60° and 120° were measured using the time-of-flight method. The samples were prepared as rectangular slabs with a 30 cm square base and thicknesses of 3, 6, and 9 cm. The leakage neutron spectra were also calculated using the MCNP-4C program based on the latest evaluated files of ²³⁸U neutron data from CENDL-3.2, ENDF/B-VIII.0, JENDL-5.0, and JEFF-3.3. Based on the comparison, the deficiencies and improvements in the ²³⁸U evaluated nuclear data were analyzed. The results showed the following. (1) The calculated results for CENDL-3.2 significantly overestimated the measurements in the elastic scattering energy interval at 60° and 120°. (2) The calculated results for CENDL-3.2 overestimated the measurements in the inelastic scattering energy interval at 120°. (3) The calculated results for CENDL-3.2 significantly overestimated the measurements in the 3-8.5 MeV energy interval at 60° and 120°. (4) The calculated results with JENDL-5.0 were generally consistent with the measurements.
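The time-of-flight method converts a measured flight time over a known path length into neutron energy; a minimal relativistic conversion is sketched below. The 8 m flight path is an assumed value for illustration, not the experiment's actual geometry.

```python
# Time-of-flight to neutron energy, relativistic form: E = (gamma - 1) * m_n c^2,
# with gamma = 1 / sqrt(1 - beta^2) and beta = L / (c * t).
import numpy as np

M_N_C2 = 939.565      # neutron rest energy, MeV
C = 299_792_458.0     # speed of light, m/s

def tof_to_energy_mev(flight_time_ns, path_length_m=8.0):
    beta = path_length_m / (C * flight_time_ns * 1e-9)
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * M_N_C2

for t_ns in (150.0, 300.0, 450.0):
    print(f"t = {t_ns:5.1f} ns  ->  E = {tof_to_energy_mev(t_ns):6.2f} MeV")
```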
Funding: This work was supported by the National Natural Science Foundation of China (U2133208, U20A20161).
Abstract: With the popularization of the Internet and the development of technology, cyber threats are increasing day by day. Threats such as malware, hacking, and data breaches have had a serious impact on cybersecurity. The network security environment in the era of big data is characterized by large data volumes, high diversity, and demanding real-time requirements, and traditional security defense methods and tools can no longer cope with complex and changing network security threats. This paper proposes a machine-learning security defense algorithm based on metadata association features that emphasizes control over unauthorized users through privacy, integrity, and availability. A user model is established and the mapping between the user model and the metadata of the data sources is generated. By analyzing the user model and its corresponding mapping relationship, a query on the user model can be decomposed into queries on the various heterogeneous data sources, and the integration of heterogeneous data sources based on metadata association features can be realized. Customer information is defined and classified, sensitive data are automatically identified and perceived, and a behavior audit and analysis platform is built to analyze user behavior trajectories, completing the construction of a machine learning customer information security defense system. The experimental results show that when the data volume is 5×10³ bit, the data storage integrity of the proposed method is 92%, the data accuracy is 98%, and the success rate of data intrusion is only 2.6%. It can be concluded that the proposed data storage method is safe, the data accuracy remains at a high level, and the data disaster recovery performance is good. The method can effectively resist data intrusion and provides high security for air traffic control. It can not only detect all viruses in user data storage but also realize integrated virus processing, further optimizing the security defense effect for user big data.
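One step of the pipeline described above, decomposing a user-model query across heterogeneous sources via a metadata map while screening sensitive fields, can be sketched as follows. The field names, source names, and sensitivity rule are hypothetical illustrations, not the paper's implementation.

```python
# Sketch: route a user-model query to heterogeneous sources using a metadata map,
# with a simple sensitivity flag for access control.  All names are hypothetical.
FIELD_METADATA = {
    "name":        {"source": "crm_db",    "sensitive": True},
    "flight_plan": {"source": "ops_store", "sensitive": False},
    "passport_no": {"source": "id_vault",  "sensitive": True},
}

def decompose_query(requested_fields, authorized):
    """Group requested fields by backing source; drop sensitive fields for unauthorized users."""
    per_source = {}
    for field in requested_fields:
        meta = FIELD_METADATA[field]
        if meta["sensitive"] and not authorized:
            continue                                   # enforce control over sensitive data
        per_source.setdefault(meta["source"], []).append(field)
    return per_source

print(decompose_query(["name", "flight_plan", "passport_no"], authorized=False))
# -> {'ops_store': ['flight_plan']}
```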
Abstract: Condensed and hydrolysable tannins are non-toxic natural polyphenols that are a commercial commodity industrialized for tanning hides to obtain leather and for a growing number of other industrial applications, mainly as substitutes for petroleum-based products. They are a distinct class of sustainable materials from the forestry industry. They have been used for hundreds of years to manufacture leather and now serve a growing number of applications in a variety of other industries, such as wood adhesives, metal coating, and pharmaceutical/medical applications, among several others. This review presents the main sources, either already commercial or potentially so, of these forestry by-products; their industrial and laboratory extraction systems; and their methods of analysis with their advantages and drawbacks, whether these methods are so simple as to appear primitive yet of proven effectiveness, or very modern and instrumental. It constitutes a basic but essential summary of what is necessary to know about these sustainable materials. In doing so, the review highlights some of the main challenges that remain to be addressed to deliver the quality and economics of tannin supply necessary to fulfill industrial production requirements for some materials-based uses.
Funding: Supported by China's National Natural Science Foundation (Nos. 62072249, 62072056); this work is also funded by the National Science Foundation of Hunan Province (2020JJ2029).
Abstract: With the development of Industry 4.0 and big data technology, the Industrial Internet of Things (IIoT) is hampered by inherent issues such as privacy, security, and fault tolerance, which pose certain challenges to its rapid development. Blockchain technology offers immutability, decentralization, and autonomy, which can greatly mitigate the inherent defects of the IIoT. In a traditional blockchain, data is stored in a Merkle tree; as the data continues to grow, the scale of the proofs used to validate it grows as well, threatening the efficiency, security, and reliability of blockchain-based IIoT. Accordingly, this paper first analyzes the inefficiency of the traditional blockchain structure in verifying the integrity and correctness of data. To solve this problem, a new Vector Commitment (VC) structure, the Partition Vector Commitment (PVC), is proposed by improving the traditional VC structure. Secondly, this paper uses PVC instead of the Merkle tree to store the big data generated by IIoT; PVC improves the efficiency of traditional VC in the commitment and opening processes. Finally, this paper uses PVC to build a blockchain-based IIoT data security storage mechanism and carries out a comparative experimental analysis. The mechanism can greatly reduce communication loss and maximize the rational use of storage space, which is of great significance for maintaining the security and stability of blockchain-based IIoT.
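The benefit of partitioning can be illustrated in a simplified form: split a data vector into fixed-size partitions and commit to each partition separately, so that opening one element needs only its partition plus a small, fixed set of partition digests rather than a proof that grows with the whole dataset. Plain SHA-256 hashing stands in for a real vector commitment here; this is a sketch of the partitioning idea, not the paper's PVC construction.

```python
# Simplified illustration of partitioned commitments (NOT the paper's PVC scheme).
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

def commit(data, partition_size):
    partitions = [data[i:i + partition_size] for i in range(0, len(data), partition_size)]
    partition_digests = [h(*p) for p in partitions]
    root = h(*partition_digests)              # top-level commitment over partition digests
    return root, partition_digests, partitions

def open_element(partitions, partition_digests, index, partition_size):
    """Proof = the element's partition plus the (small) list of partition digests."""
    return partitions[index // partition_size], partition_digests

def verify(root, value, index, proof, partition_size):
    partition, digests = proof
    ok_value = partition[index % partition_size] == value
    ok_partition = h(*partition) == digests[index // partition_size]
    return ok_value and ok_partition and h(*digests) == root

data = [f"sensor-record-{i}".encode() for i in range(64)]
root, digests, parts = commit(data, partition_size=8)
proof = open_element(parts, digests, 21, 8)
print(verify(root, b"sensor-record-21", 21, proof, 8))   # -> True
```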
Abstract: To address the problems of a single encryption algorithm, such as low encryption efficiency and unreliable metadata, for static data storage on big data platforms in the cloud computing environment, we propose a Hadoop-based big data secure storage scheme. First, to disperse the NameNode service from a single server to multiple servers, we combine the HDFS federation and HDFS high-availability mechanisms and use the ZooKeeper distributed coordination mechanism to coordinate the nodes and achieve dual-channel storage. Then, we improve the ECC encryption algorithm for the encryption of ordinary data and adopt a homomorphic encryption algorithm for data that needs to be computed on. To accelerate encryption, we adopt a dual-thread encryption mode. Finally, the HDFS control module is designed to combine the encryption algorithms with the storage model. Experimental results show that the proposed solution solves the problem of a single point of failure for metadata, performs well in terms of metadata reliability, and can realize server fault tolerance. The improved encryption algorithm, integrated with the dual-channel storage mode, improves encryption storage efficiency by 27.6% on average.
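The dual-thread idea alone can be sketched as chunked, parallel encryption on two worker threads. Fernet (a symmetric cipher from the `cryptography` package) stands in for the paper's improved ECC and homomorphic algorithms, and the chunk sizes are arbitrary illustrative values.

```python
# Sketch of dual-thread chunked encryption.  Fernet is a stand-in cipher; the
# paper's improved ECC / homomorphic algorithms are not reproduced here.
from concurrent.futures import ThreadPoolExecutor
from cryptography.fernet import Fernet

key = Fernet.generate_key()
cipher = Fernet(key)

def encrypt_chunk(chunk: bytes) -> bytes:
    return cipher.encrypt(chunk)

data = b"x" * (4 * 1024 * 1024)                                   # 4 MB of stand-in block data
chunks = [data[i:i + 512 * 1024] for i in range(0, len(data), 512 * 1024)]

with ThreadPoolExecutor(max_workers=2) as pool:                   # dual-thread encryption mode
    encrypted_chunks = list(pool.map(encrypt_chunk, chunks))      # order of chunks is preserved

print(len(encrypted_chunks), "chunks encrypted")
```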