Applications of the multivariate technique called correspondence analysis to environmental studies are relatively new and have so far been limited to spatial multivariate data sets. In this paper, a procedure for applying correspondence analysis to a large space-time data set of multiple environmental variables is presented. In particular, hourly nitrogen dioxide and carbon monoxide concentrations measured during January 1999 at several monitoring stations in a district of Northern Italy are analyzed. The procedure consists of transforming the continuous variables into categorical ones by means of appropriate indicator variables, generating special contingency tables, and applying correspondence analysis. The use of this classical multivariate technique allows the identification of important relationships between pollution levels and monitoring stations and/or between pollution levels and observation times.
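A minimal Python sketch of the three-step procedure (discretize, tabulate, analyze); the column names, category cut points, and random data are illustrative assumptions, not the paper's measurements.

```python
# Correspondence analysis via SVD of standardized residuals, applied to a
# contingency table of pollution levels by station; data are synthetic.
import numpy as np
import pandas as pd

def correspondence_analysis(table: pd.DataFrame):
    """CA via SVD of the standardized residuals of a contingency table."""
    N = table.to_numpy(dtype=float)
    P = N / N.sum()                          # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)      # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    rows = U * s / np.sqrt(r)[:, None]       # principal row coordinates
    cols = Vt.T * s / np.sqrt(c)[:, None]    # principal column coordinates
    return rows, cols, s**2                  # coordinates and principal inertias

# Step 1: turn the continuous variable into categories (indicator levels).
df = pd.DataFrame({"station": np.random.choice(list("ABC"), 744),  # 31 days x 24 h
                   "no2": np.random.gamma(4.0, 10.0, 744)})
df["level"] = pd.cut(df["no2"], bins=[0, 20, 40, 60, np.inf],
                     labels=["low", "mid", "high", "very_high"])
# Step 2: build the contingency table; Step 3: apply correspondence analysis.
table = pd.crosstab(df["level"], df["station"])
rows, cols, inertia = correspondence_analysis(table)
```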
In this paper, we mainly discuss a discrete estimation of the average differential entropy for a continuous, time-stationary, ergodic space-time random field. By estimating the probability value of a time-stationary random field over a small range, we give an entropy estimate and obtain the average entropy estimation formula on a certain bounded space region. It can be proven that the estimate of the average differential entropy converges to the theoretical value with probability 1. In addition, we conduct numerical experiments for different parameters to verify the convergence result obtained in the theoretical proofs. Funding: supported by the Shenzhen sustainable development project KCXFZ 20201221173013036 and the National Natural Science Foundation of China (91746107).
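A minimal sketch of a plug-in (bin-probability) differential entropy estimate of the kind described, assuming a histogram-type estimator; the bin count and the toy Gaussian field are illustrative, and the result can be checked against the exact Gaussian value 0.5 log(2πe).

```python
# Histogram (plug-in) differential entropy: h ~= -sum_k p_k * log(p_k / delta).
import numpy as np

def histogram_entropy(samples: np.ndarray, n_bins: int = 64) -> float:
    """Estimate differential entropy from bin probabilities and bin width."""
    counts, edges = np.histogram(samples, bins=n_bins)
    delta = edges[1] - edges[0]               # bin width
    p = counts / counts.sum()
    p = p[p > 0]                              # avoid log(0)
    return float(-(p * np.log(p / delta)).sum())

rng = np.random.default_rng(0)
field = rng.normal(size=(200, 200))           # toy stationary Gaussian field
est = histogram_entropy(field.ravel())
print(est, 0.5 * np.log(2 * np.pi * np.e))    # compare with the exact value
```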
Underwater monopulse space-time adaptive track-before-detect, which combines the space-time adaptive detector (STAD) and the track-before-detect algorithm based on dynamic programming (DP-TBD), denoted STAD-DP-TBD, can effectively detect low-speed weak targets. However, due to the complexity and variability of the underwater environment, it is difficult to obtain sufficient secondary data, resulting in a serious decline in detection and tracking performance and poor robustness of the algorithm. In this paper, based on the adaptive matched filter (AMF) test and the RAO test, underwater monopulse AMF-DP-TBD and RAO-DP-TBD algorithms that incorporate persymmetry and a symmetric spectrum, denoted PSAMF-DP-TBD and PS-RAO-DP-TBD, are proposed and compared with the AMF-DP-TBD and RAO-DP-TBD algorithms based on a persymmetric array, denoted P-AMF-DP-TBD and P-RAO-DP-TBD. The simulation results show that all four methods work normally with sufficient or slightly insufficient secondary data, but when the secondary data are severely insufficient, the P-AMF-DP-TBD and P-RAO-DP-TBD algorithms fail, while the PSAMF-DP-TBD and PS-RAO-DP-TBD algorithms retain good detection and tracking capabilities. Funding: supported by the National Natural Science Foundation of China (No. 61971412).
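A minimal sketch of the DP-TBD merit accumulation that all four variants share, on a one-dimensional cell grid with a toy energy statistic; the AMF and RAO test statistics and the persymmetric/symmetric-spectrum covariance estimation are not reproduced here.

```python
# Dynamic-programming track-before-detect: accumulate per-frame statistics
# along admissible trajectories, then backtrack the best track.
import numpy as np

def dp_tbd(stat: np.ndarray, max_step: int = 1):
    """stat[k, i]: detection statistic of cell i in frame k; a target may move
    at most max_step cells per frame (the low-speed assumption)."""
    n_frames, n_cells = stat.shape
    merit = stat[0].copy()
    back = np.zeros((n_frames, n_cells), dtype=int)
    for k in range(1, n_frames):
        new = np.empty(n_cells)
        for i in range(n_cells):
            lo, hi = max(0, i - max_step), min(n_cells, i + max_step + 1)
            j = lo + int(np.argmax(merit[lo:hi]))   # best predecessor cell
            back[k, i] = j
            new[i] = merit[j] + stat[k, i]
        merit = new
    track = [int(np.argmax(merit))]                  # backtrack best trajectory
    for k in range(n_frames - 1, 0, -1):
        track.append(back[k, track[-1]])
    return merit.max(), track[::-1]

rng = np.random.default_rng(1)
frames = rng.rayleigh(1.0, size=(10, 50))            # clutter-only background
frames[np.arange(10), 20 + np.arange(10) // 2] += 3.0  # weak, slow target
score, track = dp_tbd(frames)
```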
This paper presents a physically plausible and somewhat illuminating first step in extending the fundamental principles of mechanical stress and strain to space-time. Here the geometry of space-time, encoded in the metric tensor, is considered to be made up of a dynamic lattice of extremely small, localized fields that form a perfectly elastic, Lorentz-symmetric space-time at the global (macroscopic) scale. This theoretical model of space-time at the Planck scale leads to a somewhat surprising result, in which matter waves in curved space-time radiate thermal gravitational energy, as well as an equally intriguing relationship for the anomalous dispersion of light in a gravitational field.
The emerging virtual coupling technology aims to operate multiple train units in a Virtually Coupled Train Set (VCTS) at a minimal but safe distance. To guarantee collision avoidance, the safety distance should be calculated using the state-of-the-art space-time separation principle, which separates the Emergency Braking (EB) trajectories of two successive units during the whole EB process. In this case, the minimal safety distance is usually calculated numerically, without an analytic formulation. Thus, the constrained VCTS control problem is hard to address under space-time separation, which remains a gap in the existing literature. To solve this problem, we propose a Distributed Economic Model Predictive Control (DEMPC) approach with computational efficiency and theoretical guarantees. Specifically, to alleviate the computational burden, we transform the implicit safety constraints into explicitly linear ones, so that the optimal control problem in DEMPC becomes a quadratic programming problem that can be solved efficiently. For the theoretical analysis, sufficient conditions are derived to guarantee the recursive feasibility and stability of DEMPC, employing compatibility constraints, tube techniques, and terminal ingredient tuning. Moreover, we extend our approach with globally optimal and distributed online EB configuration methods to shorten the minimal distance among the VCTS. Finally, experimental results demonstrate the performance and advantages of the proposed approaches. Funding: supported by the National Natural Science Foundation of China (52372310), the State Key Laboratory of Advanced Rail Autonomous Operation (RAO2023ZZ001), the Fundamental Research Funds for the Central Universities (2022JBQY001), and the Beijing Laboratory of Urban Rail Transit.
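A minimal sketch of the kind of quadratic program that results once the safety constraints are made explicitly linear: a single follower unit tracking a known leader trajectory subject to a linear minimum-gap constraint, written with the third-party cvxpy library. The dynamics, weights, and gap bound are illustrative assumptions rather than the paper's DEMPC formulation.

```python
# One-unit MPC as a QP: quadratic tracking cost, linear dynamics, and a
# linear minimum-gap (safety) constraint to a known leader trajectory.
import cvxpy as cp
import numpy as np

T, dt, d_min = 10, 0.5, 100.0                 # horizon, step [s], min gap [m]
A = np.array([[1.0, dt], [0.0, 1.0]])         # state: [position, speed]
B = np.array([[0.0], [dt]])
x0 = np.array([0.0, 20.0])
leader_pos = 150.0 + 22.0 * dt * np.arange(T + 1)   # known leader positions

x = cp.Variable((2, T + 1))
u = cp.Variable((1, T))
cost, cons = 0, [x[:, 0] == x0]
for k in range(T):
    cost += cp.square(x[1, k] - 22.0) + 0.1 * cp.square(u[0, k])
    cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
             cp.abs(u[0, k]) <= 1.0,                    # traction/brake limit
             leader_pos[k + 1] - x[0, k + 1] >= d_min]  # linear safety constraint
prob = cp.Problem(cp.Minimize(cost), cons)
prob.solve()
print(prob.status, x.value[0])
```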
In a non-homogeneous environment, traditional space-time adaptive processing does not effectively suppress interference or detect targets, because the secondary data do not exactly reflect the statistical characteristics of the range cell under test. A novel methodology utilizing the direct data domain approach to space-time adaptive processing (STAP) in airborne radar non-homogeneous environments is presented. The deterministic least-squares adaptive signal processing technique operates on a snapshot-by-snapshot basis to determine the adaptive weights for nulling interference and estimating the signal of interest (SOI). Furthermore, this approach eliminates the requirement of estimating the covariance matrix from the data of neighboring range cells, which avoids computing its inverse and allows real-time implementation. Simulation results illustrate the efficiency of interference suppression in a non-homogeneous environment.
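A minimal sketch of single-snapshot, constrained least-squares nulling in the same spirit: a subaperture slides along one snapshot to form a deterministic data matrix, and the weights are solved with unit gain toward the SOI. This is a generic illustration with made-up geometry and angles, not the paper's exact direct-data-domain formulation.

```python
# Single-snapshot adaptive nulling: no covariance from neighboring range cells.
import numpy as np

def steering(n: int, theta: float) -> np.ndarray:
    """Steering vector of an n-element, half-wavelength-spaced ULA."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta))

n, m = 16, 8                                   # full array and subaperture sizes
rng = np.random.default_rng(2)
s = steering(n, np.deg2rad(10.0))              # signal of interest (SOI)
j1 = steering(n, np.deg2rad(-30.0))            # strong interferer
x = 0.5 * s + 5.0 * j1 + 0.1 * (rng.standard_normal(n)
                                + 1j * rng.standard_normal(n))

# Slide a subaperture along the single snapshot to get a deterministic data
# matrix, avoiding covariance estimation from secondary data.
X = np.stack([x[k:k + m] for k in range(n - m + 1)], axis=1)
R = X @ X.conj().T + 1e-3 * np.eye(m)          # diagonally loaded Gram matrix
sm = steering(m, np.deg2rad(10.0))
w = np.linalg.solve(R, sm)
w /= sm.conj() @ w                             # unit gain toward the SOI
print(abs(w.conj() @ steering(m, np.deg2rad(-30.0))))  # interferer is attenuated
```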
Getting insight into the spatiotemporal distribution patterns of knowledge innovation is receiving increasing attention from policymakers and economic research organizations. Many studies use bibliometric data to analyze the popularity of certain research topics, well-adopted methodologies, influential authors, and the interrelationships among research disciplines. However, the visual exploration of the patterns of research topics, with an emphasis on their spatial and temporal distribution, remains challenging. This study combines a Space-Time Cube (STC) and a 3D glyph into a visual design, ST-Map, for representing complex multivariate bibliographic data, implemented as an interactive interface. The effectiveness, understandability, and engagement of ST-Map were evaluated by seven experts in geovisualization. The results suggest that three-dimensional visualization is a promising way to show both an overview and on-demand details on a single screen.
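A minimal matplotlib sketch of a space-time cube: documents at map coordinates with publication year on the vertical axis, using marker size as a crude stand-in for the paper's 3D glyphs; the data are synthetic.

```python
# Space-time cube: (x, y) map position, time on the z-axis.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
x, y = rng.uniform(0, 100, 200), rng.uniform(0, 100, 200)   # city coordinates
year = rng.integers(2000, 2020, 200)                        # publication year
count = rng.poisson(5, 200) + 1                             # papers per point

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(x, y, year, s=10 * count, c=year, cmap="viridis", alpha=0.7)
ax.set_xlabel("x (map)"); ax.set_ylabel("y (map)"); ax.set_zlabel("year")
plt.show()
```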
This paper presents an extension of certain forms of the real Paley-Wiener theorems to the Minkowski space-time algebra. Our emphasis is on characterizing the space-time valued functions whose space-time Fourier transforms (SFT) have compact support, using the partial-derivative operator and the Dirac operator of higher order. Funding: supported by the Deanship of Scientific Research at King Khalid University, Saudi Arabia (R.G.P.1/207/43).
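For orientation, one classical form of the real Paley-Wiener theorem on the real line (in the form due to Bang), which results of this type extend; the precise Minkowski space-time statement, via higher-order partial-derivative and Dirac operators, is given in the paper itself. For $f$ with all derivatives in $L^{2}(\mathbb{R})$,

$$\lim_{n\to\infty}\left\|\frac{d^{n}f}{dx^{n}}\right\|_{L^{2}(\mathbb{R})}^{1/n}=\sup\{\,|\xi|:\xi\in\operatorname{supp}\hat{f}\,\},$$

so $\hat{f}$ is supported in $[-\sigma,\sigma]$ exactly when the growth rate of the derivative norms is at most $\sigma$.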
With the development of Industry 4.0 and big data technology, the Industrial Internet of Things (IIoT) is hampered by inherent issues such as privacy, security, and fault tolerance, which pose certain challenges to its rapid development. Blockchain technology offers immutability, decentralization, and autonomy, which can greatly mitigate the inherent defects of the IIoT. In a traditional blockchain, data are stored in a Merkle tree; as the data grow, the size of the proofs used to validate them grows as well, threatening the efficiency, security, and reliability of blockchain-based IIoT. Accordingly, this paper first analyzes the inefficiency of the traditional blockchain structure in verifying the integrity and correctness of data. To solve this problem, a new Vector Commitment (VC) structure, the Partition Vector Commitment (PVC), is proposed by improving the traditional VC structure. Secondly, this paper uses the PVC instead of the Merkle tree to store the big data generated by the IIoT; the PVC improves the efficiency of traditional VC in the commitment and opening processes. Finally, this paper uses the PVC to build a blockchain-based IIoT data security storage mechanism and carries out a comparative experimental analysis. The mechanism can greatly reduce communication loss and maximize the rational use of storage space, which is of great significance for maintaining the security and stability of blockchain-based IIoT. Funding: supported by China's National Natural Science Foundation (Nos. 62072249, 62072056) and the National Science Foundation of Hunan Province (2020JJ2029).
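For context, a minimal sketch of the Merkle-tree baseline that the PVC replaces, showing why the proof accompanying one record grows logarithmically with the number of leaves; this is the standard construction, not the proposed PVC scheme.

```python
# Merkle tree: commitment (root), membership proof (sibling path), verification.
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root_and_proof(leaves: list[bytes], index: int):
    """Return the root and the sibling path proving leaves[index]."""
    level, proof = [h(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node if odd
            level.append(level[-1])
        proof.append(level[index ^ 1])          # record the sibling
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return level[0], proof

def verify(leaf: bytes, index: int, proof: list[bytes], root: bytes) -> bool:
    node = h(leaf)
    for sib in proof:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root

data = [f"iiot-record-{i}".encode() for i in range(8)]
root, proof = merkle_root_and_proof(data, 5)
assert verify(data[5], 5, proof, root)          # proof length is log2(8) = 3
```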
In order to address the problems of a single encryption algorithm, such as low encryption efficiency and unreliable metadata, for static data storage on big data platforms in the cloud computing environment, we propose a Hadoop-based big data secure storage scheme. Firstly, to disperse the NameNode service from a single server to multiple servers, we combine the HDFS federation and HDFS high-availability mechanisms and use the ZooKeeper distributed coordination mechanism to coordinate the nodes and achieve dual-channel storage. Then, we improve the ECC encryption algorithm for the encryption of ordinary data and adopt a homomorphic encryption algorithm to encrypt data that needs to be computed on. To accelerate encryption, we adopt a dual-thread encryption mode. Finally, the HDFS control module is designed to combine the encryption algorithm with the storage model. Experimental results show that the proposed solution solves the problem of a single point of failure for metadata, performs well in terms of metadata reliability, and can realize server fault tolerance. The improved encryption algorithm, integrated with the dual-channel storage mode, improves encryption storage efficiency by 27.6% on average.
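A minimal sketch of the dual-thread encryption mode: two worker threads encrypting data blocks in parallel. Fernet (AES-based, from the cryptography package) is a convenient stand-in here; the paper's improved ECC and homomorphic algorithms are not reproduced, and the real speedup depends on the cipher implementation.

```python
# Dual-thread block encryption with a thread pool of two workers.
from concurrent.futures import ThreadPoolExecutor
from cryptography.fernet import Fernet

key = Fernet.generate_key()
cipher = Fernet(key)

def encrypt_block(block: bytes) -> bytes:
    return cipher.encrypt(block)

blocks = [bytes(64 * 1024) for _ in range(32)]       # 32 blocks of 64 KiB
with ThreadPoolExecutor(max_workers=2) as pool:      # dual-thread mode
    ciphertexts = list(pool.map(encrypt_block, blocks))

assert all(cipher.decrypt(c) == b for c, b in zip(ciphertexts, blocks))
```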
Time-series data provide important information in many fields, and their processing and analysis have been the focus of much research. However, detecting anomalies is very difficult due to data imbalance, temporal dependence, and noise. Therefore, methodologies for data augmentation and for converting time-series data into images for analysis have been studied. This paper proposes a fault detection model that uses time-series data augmentation and transformation to address the problems of data imbalance and temporal dependence and to achieve robustness to noise. The data augmentation method chosen is the addition of noise: Gaussian noise, with the noise level set to 0.002, is added to maximize the generalization performance of the model. In addition, we use the Markov Transition Field (MTF) method to effectively visualize the dynamic transitions of the data while converting the time series into images; this enables the identification of patterns in the data and assists in capturing its sequential dependencies. For anomaly detection, the PatchCore model is applied, showing excellent performance, and the detected anomalous regions are represented as heat maps; applying an anomaly map to the original image makes it possible to capture the areas where anomalies occur. The performance evaluation shows that both the F1-score and accuracy are high when the time-series data are converted to images. Additionally, when the data are processed as images rather than as raw time series, both the data size and the training time are significantly reduced. The proposed method can provide an important springboard for research on anomaly detection with time-series data, and it helps keep tasks such as analyzing complex patterns in the data lightweight. Funding: supported by the Ministry of Trade, Industry, and Energy (MOTIE), Korea, under the "Project for Research and Development with Middle Markets Enterprises and DNA (Data, Network, AI) Universities" (AI-based Safety Assessment and Management System for Concrete Structures) (Reference Number P0024559), supervised by the Korea Institute for Advancement of Technology (KIAT).
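A minimal sketch of the two preprocessing steps described: Gaussian-noise augmentation at the stated level of 0.002, and conversion of the series to a Markov Transition Field image. The quantile binning is an illustrative choice, and the PatchCore detection stage is omitted.

```python
# Noise augmentation plus Markov Transition Field image conversion.
import numpy as np

def markov_transition_field(x: np.ndarray, n_bins: int = 8) -> np.ndarray:
    """MTF: M[i, j] = transition probability between the bins of x[i] and x[j]."""
    bins = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])  # quantile binning
    q = np.digitize(x, bins)
    W = np.zeros((n_bins, n_bins))             # first-order transition counts
    for a, b in zip(q[:-1], q[1:]):
        W[a, b] += 1
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1)
    return W[q[:, None], q[None, :]]           # spread transitions over time pairs

rng = np.random.default_rng(4)
series = np.sin(np.linspace(0, 8 * np.pi, 256))
augmented = series + rng.normal(0.0, 0.002, series.shape)  # noise injection
image = markov_transition_field(augmented)                 # 256 x 256 image
```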
We consider the two-point, two-time (space-time) correlation of a passive scalar, R(r,τ), in the Kraichnan model under the assumption of homogeneity and isotropy. Using the fine-grid PDF method, we find that R(r,τ) satisfies a diffusion equation with a constant diffusion coefficient determined by the velocity variance and the molecular diffusivity. Its solution can be expressed in terms of the two-point, one-time correlation of the passive scalar, R(r,0). Moreover, the decorrelation of R(k,τ), the Fourier transform of R(r,τ), is determined by R(k,0) and a diffusion kernel. Funding: supported by the National Natural Science Foundation of China (NSFC) Basic Science Center Program for "Multiscale Problems in Nonlinear Mechanics" (Grant No. 11988102).
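In symbols, writing $D$ for the constant diffusion coefficient and a hat for the Fourier transform (a sketch of the form only; the paper derives the exact coefficient from the velocity variance and molecular diffusivity):

$$\frac{\partial R(r,\tau)}{\partial \tau}=D\,\nabla_{r}^{2}R(r,\tau),\qquad \widehat{R}(k,\tau)=\widehat{R}(k,0)\,e^{-Dk^{2}\tau},$$

so each Fourier mode decorrelates exponentially at the rate $Dk^{2}$, the factor $e^{-Dk^{2}\tau}$ playing the role of the diffusion kernel.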
Assimilating satellite radiances into Numerical Weather Prediction (NWP) models has become an important approach to increasing the accuracy of numerical weather forecasting. In this study, the assimilation technique was employed in NOAA's STMAS (Space-Time Multiscale Analysis System) to assimilate AMSU-A radiance data. Channel-selection sensitivity experiments were first conducted on the assimilated satellite data; then, a real-case analysis of AMSU-A data assimilation was performed. The analysis showed that, after assimilating AMSU-A channels 5-11 in STMAS, the objective function quickly converged, and the vertical response of each channel was consistent with the AMSU-A weighting function distribution, which suggests that these channels can be used in the assimilation of satellite data in STMAS. Taking Typhoon Morakot, which struck Taiwan Island in August 2009, as an example, experiments with and without assimilated AMSU-A radiance data were designed to analyze the impact of satellite data assimilation on STMAS. The results demonstrated that assimilation of AMSU-A data provided more accurate predictions of the precipitation region and intensity and, in particular, significantly improved the 0-6 h precipitation forecast. Funding: supported by the National Natural Science Foundation of China (41375027, 41130960, 41275114, 41275039), the Public Benefit Research Foundation of the China Meteorological Administration (GYHY201406001, GYHY201106044), the "863" Program (2012AA120903), and the National Key Research and Development Program of China (2016YFB0502501).
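For context, a sketch of the standard variational objective that such assimilation minimizes (the abstract's "objective function"); STMAS's sequential multiscale formulation adds structure not shown here:

$$J(\mathbf{x})=\tfrac{1}{2}(\mathbf{x}-\mathbf{x}_{b})^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_{b})+\tfrac{1}{2}\big(\mathbf{y}-H(\mathbf{x})\big)^{\mathrm{T}}\mathbf{R}^{-1}\big(\mathbf{y}-H(\mathbf{x})\big),$$

where $\mathbf{x}_{b}$ is the background state, $\mathbf{y}$ the AMSU-A radiances, $H$ the observation (radiative transfer) operator, and $\mathbf{B}$, $\mathbf{R}$ the background- and observation-error covariances.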
Mg alloys possess an inherent plastic anisotropy owing to the selective activation of deformation mechanisms depending on the loading condition. This characteristic results in a diverse range of flow curves that vary with the deformation condition. This study proposes a novel approach for accurately predicting the anisotropic deformation behavior of wrought Mg alloys using machine learning (ML) with data augmentation. The developed model combines four key strategies from data science: learning entire flow curves, generative adversarial networks (GAN), algorithm-driven hyperparameter tuning, and a gated recurrent unit (GRU) architecture. The proposed model, namely the GAN-aided GRU, was extensively evaluated for various predictive scenarios, such as interpolation, extrapolation, and limited dataset sizes. The model exhibited significant predictability and improved generalizability in estimating the anisotropic compressive behavior of ZK60 Mg alloys under 11 annealing conditions and three loading directions. The GAN-aided GRU results were superior to those of previous ML models and constitutive equations. The superior performance was attributed to hyperparameter optimization, GAN-based data augmentation, and the inherent suitability of the GRU for extrapolation. As a first attempt to employ ML techniques other than artificial neural networks, this study offers a novel perspective on predicting the anisotropic deformation behaviors of wrought Mg alloys. Funding: supported by a Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korean government (Grant No. 20214000000140, Graduate School of Convergence for Clean Energy Integrated Power Generation), a Korea Basic Science Institute (National Research Facilities and Equipment Center) grant funded by the Ministry of Education (2021R1A6C101A449), and a National Research Foundation of Korea grant funded by the Ministry of Science and ICT (2021R1A2C1095139), Republic of Korea.
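A minimal PyTorch sketch of the GRU half of the model, mapping a per-step feature vector (strain plus assumed condition features) to stress along the flow curve; the feature layout and layer sizes are illustrative, and the GAN augmentation and hyperparameter tuning stages are not shown.

```python
# GRU regressor over flow-curve sequences: features -> stress at each step.
import torch
import torch.nn as nn

class FlowCurveGRU(nn.Module):
    def __init__(self, n_features: int = 4, hidden: int = 32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                     # x: (batch, steps, features)
        out, _ = self.gru(x)
        return self.head(out).squeeze(-1)     # stress at every strain step

# Hypothetical per-step features: strain, annealing temperature, time, direction.
model = FlowCurveGRU()
x = torch.randn(8, 50, 4)                     # 8 flow curves, 50 strain steps
stress = torch.rand(8, 50)
loss = nn.MSELoss()(model(x), stress)
loss.backward()
```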
There are challenges in the reliability evaluation of insulated gate bipolar transistors (IGBTs) on electric vehicles, such as junction temperature measurement and limited computational and storage resources. In this paper, a junction temperature estimation approach based on a neural network, without additional cost, is proposed, and the lifetime calculation for the IGBT using electric vehicle big data is performed. The direct current (DC) voltage, operating current, switching frequency, negative temperature coefficient (NTC) thermistor temperature, and IGBT lifetime are the inputs, and the junction temperature (T_j) is the output. With the rainflow counting method, the classified irregular temperature cycles are fed into the life model to obtain the failure cycles, and a fatigue accumulation method is then used to calculate the IGBT lifetime. To cope with the limited computational and storage resources of electric vehicle controllers, the IGBT lifetime calculation runs on a big data platform; the lifetime is then transmitted wirelessly to the electric vehicles as an input to the neural network, so the junction temperature of the IGBT under long-term operating conditions can be accurately estimated. A test platform combining the motor controller with the vehicle big data server was built for the IGBT accelerated aging test. Subsequently, IGBT lifetime predictions were derived from the junction temperature estimates of both the neural network method and the thermal network method. The experiment shows that the lifetime prediction based on a neural network with big data achieves higher accuracy than that of the thermal network, which improves the reliability evaluation of the system.
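A minimal sketch of the lifetime step: rainflow counting of an estimated junction temperature profile followed by Miner's linear damage accumulation, using the third-party rainflow package. The Coffin-Manson coefficients are illustrative placeholders, not the paper's calibrated life model.

```python
# Rainflow counting + Miner's rule on an estimated junction temperature profile.
import numpy as np
import rainflow

def cycles_to_failure(delta_t: float, A: float = 3.0e14, n: float = 5.0) -> float:
    """Toy Coffin-Manson life model: N_f = A * dT^-n (placeholder constants)."""
    return A * delta_t ** -n

rng = np.random.default_rng(5)
t_junction = (60 + 25 * np.abs(np.sin(np.linspace(0, 40, 2000)))
              + rng.normal(0, 2, 2000))        # estimated T_j profile [deg C]

damage = 0.0
for delta_t, count in rainflow.count_cycles(t_junction):
    if delta_t > 1.0:                          # ignore tiny thermal ripples
        damage += count / cycles_to_failure(delta_t)

print(f"accumulated damage per profile: {damage:.2e}")  # lifetime ~ 1 / damage
```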
As the risks associated with air turbulence are intensified by climate change and the growth of the aviation industry, it has become imperative to monitor and mitigate these threats to ensure civil aviation safety. The eddy dissipation rate (EDR) has been established as the standard metric for quantifying turbulence in civil aviation. This study explores a universally applicable symbolic classification approach, based on genetic programming, to detect turbulence anomalies using quick access recorder (QAR) data, treating the detection of atmospheric turbulence as an anomaly detection problem. Comparative evaluations demonstrate that this approach performs on par with direct EDR calculation methods in identifying turbulence events. Moreover, comparisons with alternative machine learning techniques indicate that the proposed technique is the best-performing methodology among those tested. In summary, symbolic classification via genetic programming enables accurate turbulence detection from QAR data, comparable to established EDR approaches and surpassing the machine learning algorithms considered. This finding highlights the potential of integrating symbolic classifiers into turbulence monitoring systems to enhance civil aviation safety amidst rising environmental and operational hazards. Funding: supported by the Meteorological Soft Science Project (Grant No. 2023ZZXM29), the Natural Science Fund Project of Tianjin, China (Grant No. 21JCYBJC00740), and the Key Research and Development-Social Development Program of Jiangsu Province, China (Grant No. BE2021685).
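A minimal sketch of symbolic classification by genetic programming with the third-party gplearn library; the window statistics used as features, and the synthetic labels, are illustrative stand-ins for the study's actual QAR feature set.

```python
# Symbolic classification (genetic programming) on toy turbulence features.
import numpy as np
from gplearn.genetic import SymbolicClassifier

rng = np.random.default_rng(6)

def window_features(turbulent: bool, n: int = 500) -> np.ndarray:
    sigma = 0.35 if turbulent else 0.08
    w = rng.normal(0, sigma, (n, 64))          # vertical-acceleration windows
    return np.c_[w.std(axis=1),                # per-window statistics as features
                 np.abs(np.diff(w, axis=1)).mean(axis=1),
                 w.max(axis=1) - w.min(axis=1)]

X = np.vstack([window_features(False), window_features(True)])
y = np.r_[np.zeros(500), np.ones(500)]         # 0 = calm, 1 = turbulent

clf = SymbolicClassifier(population_size=500, generations=10, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))                         # training accuracy of the evolved rule
```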
Since the impoundment of the Three Gorges Reservoir (TGR) in 2003, numerous slopes have experienced noticeable movement or destabilization owing to reservoir level changes and seasonal rainfall. One case is the Outang landslide, a large-scale and active landslide on the south bank of the Yangtze River. The latest available monitoring data and site investigations are analyzed to establish the spatial and temporal deformation characteristics of the landslide. Data mining technology, including two-step clustering and the Apriori algorithm, is then used to identify the dominant triggers of landslide movement. In the data mining process, the two-step clustering method clusters the candidate triggers and the displacement rate into several groups, and the Apriori algorithm generates correlation criteria for cause and effect. The analysis considers multiple locations on the landslide and incorporates two time scales: long-term deformation on a monthly basis and short-term deformation on a daily basis. The analysis shows that the deformation of the Outang landslide is driven by both rainfall and reservoir water, while it varies spatiotemporally mainly due to differences in local responses to hydrological factors. The data mining results reveal different dominant triggering factors depending on the monitoring frequency: the monthly and bi-monthly cumulative rainfall control the monthly deformation, whereas the 10-d cumulative rainfall and the 5-d cumulative drop of the reservoir water level dominate the daily deformation. It is concluded that the spatiotemporal deformation pattern and the data mining rules associated with precipitation and reservoir water level have the potential to be broadly applied to improve landslide prevention and control in dam reservoirs and other landslide-prone areas. Funding: supported by the Natural Science Foundation of Shandong Province, China (Grant No. ZR2021QD032).
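A minimal sketch of the rule-mining step: candidate triggers and the displacement rate are discretized, one-hot encoded, and mined with Apriori via the third-party mlxtend library; the cut points, column names, and synthetic data are illustrative assumptions.

```python
# Apriori association rules between discretized triggers and deformation rate.
import numpy as np
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

rng = np.random.default_rng(7)
n = 120                                        # monthly records
rain = rng.gamma(2.0, 60.0, n)                 # monthly cumulative rainfall [mm]
drawdown = rng.normal(0.0, 2.0, n)             # reservoir level change [m]
rate = 0.02 * rain + 1.5 * np.maximum(-drawdown, 0) + rng.normal(0, 2, n)

df = pd.DataFrame({                            # one-hot boolean items
    "rain_high": rain > np.quantile(rain, 0.7),
    "drawdown_fast": drawdown < -2.0,
    "deformation_fast": rate > np.quantile(rate, 0.7),
})
itemsets = apriori(df, min_support=0.1, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```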
A benchmark experiment on ²³⁸U slab samples was conducted using a deuterium-tritium neutron source at the China Institute of Atomic Energy. The leakage neutron spectra in the 0.8-16 MeV energy range at 60° and 120° were measured using the time-of-flight method. The samples were prepared as rectangular slabs with a 30 cm square base and thicknesses of 3, 6, and 9 cm. The leakage neutron spectra were also calculated with the MCNP-4C program based on the latest evaluated files of ²³⁸U neutron data from CENDL-3.2, ENDF/B-VIII.0, JENDL-5.0, and JEFF-3.3. Based on the comparison, the deficiencies and possible improvements in the ²³⁸U evaluated nuclear data were analyzed. The results showed the following: (1) the calculated results for CENDL-3.2 significantly overestimated the measurements in the elastic scattering energy interval at 60° and 120°; (2) the calculated results for CENDL-3.2 overestimated the measurements in the inelastic scattering energy interval at 120°; (3) the calculated results for CENDL-3.2 significantly overestimated the measurements in the 3-8.5 MeV energy interval at 60° and 120°; and (4) the calculated results with JENDL-5.0 were generally consistent with the measurements. Funding: supported by the general program (No. 1177531) and joint funding (No. U2067205) from the National Natural Science Foundation of China.
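A minimal sketch of the comparison metric implicit in such benchmarks, the calculated-to-experimental (C/E) ratio per energy interval; all numbers below are synthetic placeholders, not the measured spectra.

```python
# C/E ratios of calculated vs. measured leakage spectra per energy bin.
import numpy as np

edges = np.array([0.8, 2.0, 3.0, 8.5, 16.0])    # energy bin edges [MeV]
measured = np.array([1.00, 0.62, 0.35, 0.080])  # toy spectrum [a.u.]
calculated = {                                  # toy library predictions [a.u.]
    "CENDL-3.2": np.array([1.18, 0.66, 0.47, 0.082]),
    "JENDL-5.0": np.array([1.02, 0.61, 0.36, 0.079]),
}
for lib, calc in calculated.items():
    ce = calc / measured
    for (lo, hi), r in zip(zip(edges[:-1], edges[1:]), ce):
        tag = "overestimates" if r > 1.1 else "consistent"
        print(f"{lib}: {lo:.1f}-{hi:.1f} MeV  C/E = {r:.2f}  ({tag})")
```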
When building a classification model, the scenario where the samples of one class significantly outnumber those of the other class is called data imbalance. Data imbalance causes the trained model to favor the majority class (usually defined as the negative class), which may harm the accuracy of the minority class (usually defined as the positive class) and lead to poor overall performance. A method called MSHR-FCSSVM for imbalanced data classification is proposed in this article, based on a new hybrid resampling approach (MSHR) and a new fine cost-sensitive support vector machine classifier (FCSSVM). The MSHR measures the separability of each negative sample through its silhouette value, calculated using the Mahalanobis distance between samples; based on this, so-called pseudo-negative samples are screened out to generate new positive samples through linear interpolation (the over-sampling step) and are finally deleted (the under-sampling step). This approach replaces pseudo-negative samples with newly generated positive samples one by one to clear up the inter-class overlap on the borderline, without changing the overall scale of the dataset. The FCSSVM is an improved version of the traditional cost-sensitive SVM (CS-SVM). It simultaneously considers the influences of both the sample-number imbalance and the class distribution on classification, and it finely tunes the class cost weights with an efficient optimizer based on the physical phenomenon of rime ice (the RIME algorithm), using cross-validation accuracy as the fitness function, to accurately adjust the classification borderline. To verify the effectiveness of the proposed method, a series of experiments is carried out on 20 imbalanced datasets, including both mildly and extremely imbalanced ones. The experimental results show that MSHR-FCSSVM performs better than the comparison methods in most cases and that both the MSHR and the FCSSVM play significant roles. Funding: supported by the Yunnan Major Scientific and Technological Projects (Grant No. 202302AD080001) and the National Natural Science Foundation of China (No. 52065033).
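A minimal sketch of the cost-sensitive half of the method: class cost weights tuned by cross-validation on an imbalanced dataset, with a plain grid search standing in for the RIME optimizer; the MSHR resampling step is not reproduced.

```python
# Cost-sensitive SVM: tune the minority-class cost weight by cross-validation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic 9:1 imbalanced binary dataset (class 1 is the minority).
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

grid = {"class_weight": [{0: 1, 1: w} for w in (2, 5, 10, 20)],  # cost weights
        "C": [0.1, 1, 10]}
search = GridSearchCV(SVC(kernel="rbf"), grid, scoring="f1", cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```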