Cable-stayed bridges have been widely used in high-speed railway infrastructure. The accurate determination of the cables' representative temperatures is vital during the design, construction, and maintenance of cable-stayed bridges. However, the representative temperatures of stay cables are not specified in existing design codes. To address this issue, this study investigates the distribution of cable temperature and determines its representative value. First, an experimental investigation spanning one year was carried out near the bridge site to obtain temperature data. Statistical analysis of the measured data reveals that the temperature distribution is generally uniform over the cable cross-section, without a significant temperature gradient. Then, based on the limited data, Monte Carlo simulation, gradient boosted regression trees (GBRT), and univariate linear regression (ULR) are employed to predict the cables' representative temperature throughout the service life. These methods overcome the limitation of insufficient monitoring data and accurately predict the representative temperature of the cables; however, each has its own advantages and limitations in applicability and accuracy. A comprehensive evaluation of their performance is conducted, and practical recommendations are provided for their application. The proposed methods and representative temperatures provide a sound basis for the operation and maintenance of in-service long-span cable-stayed bridges.
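The abstract pairs regression with Monte Carlo sampling to extrapolate a representative temperature from a limited monitoring record. A minimal sketch of the ULR-plus-Monte-Carlo step on entirely hypothetical data (the paper's GBRT variant would substitute a tree ensemble for the linear fit; the 98th percentile is an illustrative choice, not the paper's criterion):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-year record: daily ambient temperature (degC) and a
# cable-section temperature that tracks ambient with a small offset.
ambient = 15 + 12 * np.sin(2 * np.pi * np.arange(365) / 365) + rng.normal(0, 2, 365)
cable = 1.05 * ambient + 1.8 + rng.normal(0, 0.8, 365)

# Univariate linear regression (ULR): cable temperature vs ambient.
slope, intercept = np.polyfit(ambient, cable, deg=1)

# Monte Carlo step: sample many plausible ambient conditions and propagate
# them through the fitted model to estimate a representative (here, 98th
# percentile) cable temperature over the service life.
ambient_samples = rng.normal(loc=ambient.mean(), scale=ambient.std(), size=100_000)
cable_samples = slope * ambient_samples + intercept
representative = np.percentile(cable_samples, 98)

print(f"slope={slope:.2f}, intercept={intercept:.2f}")
print(f"98th-percentile cable temperature: {representative:.1f} degC")
```

The split matters: the regression transfers the short record onto a predictor that is available long-term, and the sampling turns that transfer into a distribution from which a representative value can be read off.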
Accurate long-term energy demand forecasting is essential for energy planning and policy making. However, owing to immature energy data collection and statistical methods, the available data are limited in many regions. In this paper, on the basis of a comprehensive literature review, we propose a hybrid model based on the long-range energy alternatives planning (LEAP) model to improve the accuracy of energy demand forecasting in such regions. Taking Hunan province, China as a typical case, the hybrid model was applied to estimate future energy demand and energy-saving potentials in different sectors. The structure of the LEAP model was estimated from a Sankey energy-flow diagram, and Leslie matrix and autoregressive integrated moving average (ARIMA) models were used to predict population, industrial structure and transportation turnover, respectively. A Monte Carlo method was employed to evaluate the uncertainty of the forecasts. The results showed that the hybrid model, combined with scenario analysis, provided a relatively accurate forecast of long-term energy demand in regions with limited statistical data; the average standard error of the probabilistic distribution of 2030 energy demand was as low as 0.15. The prediction results provide supportive references for identifying energy-saving potentials and energy development pathways.
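The Leslie matrix mentioned here is the standard tool for projecting population by age class, one of the demand drivers fed into LEAP. A minimal sketch with invented fertility and survival rates (none of the values come from the Hunan case):

```python
import numpy as np

# Hypothetical three-age-class Leslie matrix: first row holds fertility
# rates, the sub-diagonal holds survival probabilities between classes.
L = np.array([
    [0.0, 1.2, 0.8],   # births per individual in each age class
    [0.9, 0.0, 0.0],   # survival from class 0 to class 1
    [0.0, 0.8, 0.0],   # survival from class 1 to class 2
])

n = np.array([1000.0, 800.0, 600.0])  # initial population by age class

# Project the population forward year by year: n_{t+1} = L @ n_t.
for year in range(10):
    n = L @ n

# The long-run growth rate is the dominant eigenvalue of L.
growth_rate = max(abs(np.linalg.eigvals(L)))
print(f"population after 10 years: {n.sum():.0f}")
print(f"asymptotic annual growth rate: {growth_rate:.3f}")
```

In a hybrid setup like the paper's, the projected age structure would then drive sectoral demand, while ARIMA handles the time-series drivers.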
The Swiss Agency for Development and Cooperation (SDC) has funded the Rural Water and Sanitation Support Programme (RWSSP), which has increased access to public water supply throughout Europe's youngest state, Kosovo, over the past ten years. The Programme, implemented by Dorsch International Consultants GmbH and Community Development Initiatives, has, among other activities, implemented groundwater protection methods. Nevertheless, groundwater protection remains a challenge in Kosovo. The water law prescribes source protection similar to the German rules, yet modelling-based planning of water source protection zones remains challenging. The present study describes in detail the development of the hydrogeological and mathematical groundwater model for the technical delineation of the wellhead protection area for the Ferizaj well fields under limited data availability. The study shows that even when not all data are available, it is possible, and necessary, to use mathematical groundwater models to delineate wellhead protection areas.
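For orientation, the simplest quantitative delineation method that a numerical model refines is the calculated fixed radius, which equates the volume pumped over a travel time with the pore volume of a cylinder around the well. A sketch with illustrative values only (not taken from the Ferizaj study; `fixed_radius` is our own helper name):

```python
import math

def fixed_radius(pumping_rate_m3_per_day, travel_time_days,
                 porosity, aquifer_thickness_m):
    """Calculated fixed-radius wellhead protection zone.

    Equates the volume pumped over the travel time with the pore
    volume of a cylinder around the well:
        Q * t = pi * r**2 * b * n  ->  r = sqrt(Q * t / (pi * b * n))
    """
    q, t = pumping_rate_m3_per_day, travel_time_days
    return math.sqrt(q * t / (math.pi * aquifer_thickness_m * porosity))

# Illustrative: a 2000 m3/d well, 50-day protection zone, 20% porosity,
# 25 m thick aquifer.
r = fixed_radius(2000, 50, 0.20, 25)
print(f"50-day protection radius: {r:.0f} m")
```

A flow model replaces this symmetric cylinder with particle-tracked capture zones that respect the actual gradient and heterogeneity, which is why the study argues for modelling even under data scarcity.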
Traditional global sensitivity analysis (GSA) neglects the epistemic uncertainties associated with the probabilistic characteristics (i.e., distribution type and parameters) of input rock properties arising from small datasets while mapping the relative importance of properties to the model response. This paper proposes an augmented Bayesian multi-model inference (BMMI) coupled with GSA methodology (BMMI-GSA) to address this issue by estimating the imprecision in the moment-independent sensitivity indices of rock structures arising from the small size of input data. The methodology employs BMMI to quantify the epistemic uncertainties associated with the model type and parameters of input properties. The estimated uncertainties are propagated into the moment-independent Borgonovo's indices by employing a reweighting approach on candidate probabilistic models. The methodology is showcased for a rock slope prone to stress-controlled failure in the Himalayan region of India. It proved superior to conventional GSA (which neglects all epistemic uncertainties) and Bayesian coupled GSA (B-GSA) (which neglects model uncertainty) owing to its capability to incorporate uncertainties in both the model type and the parameters of properties. Imprecise Borgonovo's indices estimated via the proposed methodology provide confidence intervals of the sensitivity indices instead of fixed-point estimates, which better informs data collection efforts. Analyses performed with varying sample sizes suggested that the uncertainties in the sensitivity indices reduce significantly with increasing sample size, and accurate importance ranking of the properties was possible only with large samples. Further, the impact of prior knowledge in terms of prior ranges and distributions was significant; hence, any related assumption should be made carefully.
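Borgonovo's delta index, the quantity being bounded here, is half the expected L1 distance between the unconditional output density and the density conditional on one input. A crude histogram estimator on a toy model (the paper's BMMI reweighting layer is omitted; this only shows the base index):

```python
import numpy as np

def borgonovo_delta(x, y, n_bins=20):
    """Histogram-based estimate of Borgonovo's moment-independent delta:
    half the expected L1 distance between the unconditional density of y
    and its density conditional on x."""
    edges = np.histogram_bin_edges(y, bins=50)
    width = np.diff(edges)
    f_y, _ = np.histogram(y, bins=edges, density=True)
    # Partition x into equal-probability slices and average the L1 gap.
    quantiles = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    total = 0.0
    for lo, hi in zip(quantiles[:-1], quantiles[1:]):
        mask = (x >= lo) & (x <= hi)
        f_cond, _ = np.histogram(y[mask], bins=edges, density=True)
        total += np.sum(np.abs(f_cond - f_y) * width)
    return 0.5 * total / n_bins

rng = np.random.default_rng(1)
x1 = rng.normal(size=50_000)          # influential input
x2 = rng.normal(size=50_000)          # nearly irrelevant input
y = 3.0 * x1 + 0.1 * x2

d1, d2 = borgonovo_delta(x1, y), borgonovo_delta(x2, y)
print(f"delta(x1)={d1:.2f}, delta(x2)={d2:.2f}")
```

The BMMI-GSA idea is then to recompute such indices under each candidate input distribution and reweight by posterior model probability, which spreads each point estimate into an interval.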
Natural mortality rate (M) is one of the essential parameters in fishery stock assessment; however, its estimation is commonly rough, and changes in M due to natural and anthropogenic impacts have long been ignored. The consequences of simplified M estimation and of M variations for the assessment and management of fish stocks remain poorly understood. This study evaluated the impacts of changes in the natural mortality of Spanish mackerel (Scomberomorus niphonius) on their management strategies using data-limited methods. We tested the performance of a variety of management procedures (MPs) under variations of M in the mackerel stock derived from diverse estimation methods. The management strategy evaluation showed that four management procedures, DCAC, SPMSY, curE75 and minlenLopt1, were more robust to changes in M than the others; however, their performance was substantially influenced by the significant decrease of M from the 1970s to 2017. Relative population biomass (measured as the probability of B > 0.5BMSY) increased significantly with decreasing M, whereas the probability of overfishing varied markedly across MPs. The decrease of M had minor effects on the long-term yield of curE75 and minlenLopt1, and reduced the fluctuation of yield (measured as the probability of AAVY < 15%) for DCAC and SPMSY. In general, the different methods for estimating M had minor effects on the performance of MPs, whereas temporal changes in M had substantial influence. Considering the status of the Spanish mackerel fishery in China, we recommend curE75 as offering the best trade-off between exploitation and conservation of fishery resources, and we discuss the potentials and issues in its implementation.
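Why mis-specifying M propagates into every data-limited MP can be seen from the Baranov catch equation, which splits a cohort's deaths between fishing and natural causes. A sketch with invented rates (not the mackerel stock's values):

```python
import math

def baranov_catch(n0, fishing_mortality, natural_mortality):
    """Baranov catch equation: catch over one year from a cohort of n0
    fish subject to instantaneous rates F (fishing) and M (natural)."""
    f, m = fishing_mortality, natural_mortality
    z = f + m  # total mortality
    return n0 * (f / z) * (1.0 - math.exp(-z))

# Same fishing pressure under two assumed values of M: the catch credited
# to the fishery changes noticeably, which is how an M assumption biases
# data-limited assessments and the MPs built on them.
for m in (0.3, 0.6):
    c = baranov_catch(1000, 0.4, m)
    print(f"M={m}: catch = {c:.0f} fish")
```

With F fixed at 0.4, raising the assumed M shrinks the share of deaths attributed to fishing, so the same catch data imply a very different exploitation rate.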
The manufacturing of composite structures is a highly complex task with inevitable risks, particularly those associated with aleatoric and epistemic uncertainty in both materials and processes, as well as the need for in-situ decision-making to mitigate defects during manufacturing. In aerospace composites production in particular, there is a heightened impetus to address and reduce this risk. Current qualification and substantiation frameworks within the aerospace industry define tractable methods for risk reduction. In parallel, Industry 4.0 is an emerging set of technologies and tools that can enable better decision-making towards risk reduction, supported by data-driven models. It offers new paradigms for manufacturers by enabling in-situ decisions that optimize the process as a dynamic system. However, the static nature of current (pre-Industry 4.0) best-practice frameworks may be viewed as at odds with this emerging approach. In addition, many of the predictive tools leveraged in an Industry 4.0 system are black-box in nature, which raises further concerns of tractability, interpretability and, ultimately, risk. This article presents a perspective on the current state of the art in the aerospace composites industry, focusing on risk reduction in autoclave processing as an example system, while reviewing current trends and needs towards a Composites 4.0 future.
In recent years, deep learning algorithms have become popular for recognizing targets in synthetic aperture radar (SAR) images. However, owing to overfitting, the performance of these models tends to worsen when only a small amount of training data is available. To address overfitting and the unsatisfactory performance of network models in small-sample remote sensing target recognition, this paper uses a deep residual network to autonomously extract image features and proposes the deep-feature Bayesian classifier model (RBnet) for SAR image target recognition. In the RBnet, a Bayesian classifier is used to improve SAR target recognition and raise accuracy when training data are limited. Experimental results on the MSTAR dataset show that the RBnet fully exploits the effective information in limited samples and recognizes SAR targets more accurately. Compared with other state-of-the-art methods, our method offers significant improvements in recognition accuracy under limited training data. The RBnet is only moderately difficult to implement and is well suited to popularization in engineering applications of small-sample remote sensing target recognition.
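The abstract does not specify the Bayesian classifier's form, so as a stand-in, here is a minimal Gaussian naive Bayes over feature vectors of the kind a residual network might emit (synthetic features; the class separation is invented for illustration):

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive-Bayes classifier over feature vectors,
    e.g. features pooled from a residual network's last layer."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes])
        self.logprior = np.log([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log p(x|c) + log p(c), assuming independent Gaussian features.
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(axis=2)
        return self.classes[np.argmax(ll + self.logprior, axis=1)]

rng = np.random.default_rng(0)
# Two hypothetical target classes with 30 training feature vectors each.
X = np.vstack([rng.normal(0, 1, (30, 8)), rng.normal(3, 1, (30, 8))])
y = np.array([0] * 30 + [1] * 30)
model = GaussianNB().fit(X, y)
acc = np.mean(model.predict(X) == y)
print(f"training accuracy: {acc:.2f}")
```

The appeal in small-sample regimes is that the classifier head estimates only per-class means and variances, so it has far fewer parameters to overfit than a fully trained softmax layer.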
Sparse-view tomography has many applications, such as low-dose computed tomography (CT). With undersampled data, a perfect image cannot be expected. The goal of this paper is to obtain a tomographic image better than the naïve filtered backprojection (FBP) reconstruction that uses linear interpolation to complete the measurements. This paper proposes a method to estimate the unmeasured projections by displacement-function interpolation. Displacement-function estimation is a nonlinear procedure, and the linear interpolation is performed on the displacement function instead of on the sinogram itself. As a result, the estimated measurements are not a linear transformation of the measured data. The proposed method is compared with linear interpolation and shows superior performance.
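The core idea, interpolating motion rather than intensity, can be shown in one dimension: when a feature shifts between two measured projections, averaging the projections smears it, while interpolating the estimated displacement keeps it sharp. A toy sketch (integer shifts only; the paper's displacement-function estimation is more general):

```python
import numpy as np

def estimate_shift(p0, p1):
    """Integer displacement between two 1-D projections, located by the
    peak of their circular cross-correlation (computed via FFT)."""
    corr = np.real(np.fft.ifft(np.fft.fft(p1) * np.conj(np.fft.fft(p0))))
    k = int(np.argmax(corr))
    n = len(p0)
    return k if k <= n // 2 else k - n  # map to a signed shift

def interpolate_view(p0, p1):
    """Estimate the view halfway between p0 and p1 by interpolating the
    displacement (shift) rather than the intensities themselves."""
    s = estimate_shift(p0, p1)
    return np.roll(p0, s // 2)         # shift p0 halfway toward p1

# A hypothetical projection feature: a bump that moves 6 samples between
# the two measured views.
x = np.arange(128)
def bump(c):
    return np.exp(-0.5 * ((x - c) / 3.0) ** 2)

p0, p1, true_mid = bump(40), bump(46), bump(43)
mid_disp = interpolate_view(p0, p1)
mid_lin = 0.5 * (p0 + p1)              # naive intensity interpolation
err_disp = np.abs(mid_disp - true_mid).max()
err_lin = np.abs(mid_lin - true_mid).max()
print(f"displacement interp error {err_disp:.3f} vs linear {err_lin:.3f}")
```

The intensity average produces a double-humped, flattened profile, whereas the displacement-interpolated view lands the bump where the true intermediate projection has it; this is the nonlinearity the abstract refers to.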
For the random vibration of an airborne platform, accurate evaluation is key to ensuring the normal operation of airborne equipment in flight. However, only limited power spectral density (PSD) data can be obtained at the flight-test stage, so conventional evaluation methods cannot be employed when the distribution characteristics and prior information are unknown. In this paper, the fuzzy norm method (FNM) is proposed, combining the advantages of fuzzy theory and norm theory. The proposed method can extract extensive system information from limited data without requiring an assumed probability distribution. First, the FNM is employed to evaluate the variable interval and expanded uncertainty from limited PSD data, and its performance is demonstrated in terms of confidence level, reliability and the computational accuracy of the expanded uncertainty. In addition, the optimal fuzzy parameters are discussed to meet the requirements of aviation standards and metrological practice. Finally, computer simulation is used to demonstrate the adaptability of the FNM. Compared with statistical methods, the FNM is superior for evaluating expanded uncertainty from limited data; the results show that the reliability of the calculation and evaluation exceeds 95%.
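For context, the classical statistical baseline the FNM is compared against is the GUM-style Type A evaluation, where the expanded uncertainty is the standard uncertainty of the mean scaled by a coverage factor. A sketch with invented PSD readings (the Student-t factor 2.262 is the standard two-sided 95% value for 9 degrees of freedom):

```python
import math

def expanded_uncertainty(samples, coverage_factor):
    """GUM-style Type A evaluation: the standard uncertainty of the mean
    times a coverage factor k gives the expanded uncertainty U."""
    n = len(samples)
    mean = sum(samples) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
    u = s / math.sqrt(n)          # standard uncertainty of the mean
    return mean, coverage_factor * u

# Ten hypothetical PSD readings (g^2/Hz) at one frequency line.
psd = [0.041, 0.043, 0.040, 0.045, 0.042, 0.044, 0.039, 0.043, 0.041, 0.042]
# Two-sided 95% Student-t coverage factor for n-1 = 9 degrees of freedom.
mean, U = expanded_uncertainty(psd, coverage_factor=2.262)
print(f"PSD = {mean:.4f} +/- {U:.4f} g^2/Hz (95% coverage)")
```

This baseline presumes an approximately normal sample; the paper's point is precisely that a fuzzy-norm interval can be formed when that distributional assumption cannot be justified from so few points.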
Predicting an external flow field from limited data or measurements has long attracted the interest of researchers in many industrial applications. The physics-informed neural network (PINN) provides a seamless framework for combining measured data with a deep neural network, making the network capable of enforcing certain physical constraints. Unlike data-driven models that learn an end-to-end mapping between sensor data and the high-dimensional flow field, a PINN needs no prior high-dimensional field as a training dataset and can construct the mapping from sensor data to the high-dimensional flow field directly. However, extrapolation of the flow field in the temporal direction is limited by the lack of training data. Therefore, we apply a long short-term memory (LSTM) network together with a PINN to predict the flow field and hydrodynamic force in the future temporal domain from limited data measured in the spatial domain. The physical constraints (the conservation laws of fluid flow, e.g., the Navier-Stokes equations) are embedded into the loss function so that the trained network captures the latent physical relations between the output fluid parameters and the input spatio-temporal parameters. The sparsely measured points in this work are obtained from a computational fluid dynamics (CFD) solver based on the local radial basis function (RBF) method. Different numbers of spatially measured points (4-35) downstream of the cylinder are trained with and without prior knowledge of the Reynolds number to validate the availability and accuracy of the proposed approach. More practical applications of flow-field prediction can compute the drag and lift forces on the cylinder, with different geometric shapes taken into account. Comparing the flow-field reconstruction and force prediction with CFD results, the proposed approach achieves a comparable level of accuracy while requiring significantly fewer data in the spatial domain. The numerical results demonstrate that the proposed approach, with a specific deep-neural-network configuration, has great potential for emerging cases where the measured data are limited.
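The heart of the PINN loss is a sum of a data-misfit term at the sensors and a physics-residual term over the whole domain. A numpy sketch on the 1-D advection equation u_t + c u_x = 0, a deliberately simple stand-in for the Navier-Stokes constraint used in the paper, with finite differences in place of automatic differentiation:

```python
import numpy as np

def composite_loss(u, x, t, c, u_obs, obs_idx):
    """PINN-style loss for a field u(t, x): data misfit at sparse sensors
    plus the residual of the advection PDE u_t + c*u_x = 0, here
    discretised with finite differences instead of autodiff."""
    u_t = np.gradient(u, t, axis=0)
    u_x = np.gradient(u, x, axis=1)
    physics = np.mean((u_t + c * u_x) ** 2)          # PDE residual
    data = np.mean((u[obs_idx] - u_obs) ** 2)        # sensor misfit
    return data + physics

x = np.linspace(0, 2 * np.pi, 200)
t = np.linspace(0, 1, 100)
c = 1.0
T, X = np.meshgrid(t, x, indexing="ij")

u_exact = np.sin(X - c * T)        # travelling wave: solves the PDE
u_wrong = np.sin(X) * (1 - T)      # decaying field: violates it

obs_idx = (slice(None), 10)        # a single "sensor" column in space
u_obs = u_exact[obs_idx]

loss_good = composite_loss(u_exact, x, t, c, u_obs, obs_idx)
loss_bad = composite_loss(u_wrong, x, t, c, u_obs, obs_idx)
print(f"loss(exact)={loss_good:.6f}  loss(violating)={loss_bad:.4f}")
```

A training loop would adjust network weights to drive this composite loss down; the physics term is what lets a single sensor column constrain the field far from the measurements.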
Coherent pulse stacking (CPS) is a new time-domain coherent addition technique that stacks several optical pulses into a single output pulse, enabling high pulse energy and high average power. A Z-domain model of the pulsed laser is assembled to describe the optical interference process. An algorithm that extracts the cavity phase and pulse phases from limited data, where only the pulse intensity is available, is developed to diagnose optical cavity resonators. We also implement the algorithm on a cascaded system of multiple optical cavities, achieving phase errors of less than 1.0° (root mean square), which ensures the stability of CPS.
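The basic invertibility such intensity-only diagnostics rely on can be shown for two interfering fields: the measured intensity encodes the cosine of the relative phase, which can be solved for directly. A sketch with invented amplitudes (the paper's cascaded-cavity algorithm is far more involved, and the sign ambiguity below is exactly the kind of degeneracy extra measurements must resolve):

```python
import numpy as np

# Recovering an unknown relative phase from intensity-only data: with two
# interfering fields of known amplitudes a and b, the measured intensity
#     I = |a + b*exp(i*phi)|^2 = a^2 + b^2 + 2*a*b*cos(phi),
# so cos(phi), and hence |phi|, can be inverted from I alone.
a, b = 1.0, 0.7
phi_true = np.deg2rad(35.0)           # hypothetical unknown phase
intensity = np.abs(a + b * np.exp(1j * phi_true)) ** 2

cos_phi = (intensity - a**2 - b**2) / (2 * a * b)
phi_est = np.arccos(np.clip(cos_phi, -1.0, 1.0))  # sign of phi is lost
print(f"recovered phase magnitude: {np.rad2deg(phi_est):.1f} deg")
```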
To investigate the influence of real leading-edge manufacturing error on the aerodynamic performance of high-subsonic compressor blades, a family of leading-edge manufacturing error data was obtained from measured compressor cascades. Considering the limited samples, the distribution forms of the leading-edge angle and leading-edge radius were evaluated by the Shapiro-Wilk test and quantile-quantile plots; the statistical characteristics provided can inform later related research. The B-spline and Bezier parameterization methods are adopted to create geometric models with manufacturing error based on the leading-edge angle and radius. The influence of real manufacturing error is quantified and analyzed by a self-developed non-intrusive polynomial chaos method and Sobol' indices, and the mechanism by which leading-edge manufacturing error affects aerodynamic performance is discussed. The results show that the total pressure loss coefficient is sensitive to the leading-edge manufacturing error compared with the static pressure ratio, especially at high incidence. Specifically, manufacturing error at the leading edge influences the local flow acceleration and subsequently causes fluctuation of the downstream flow. The aerodynamic performance is sensitive to the manufacturing error of the leading-edge radius at the design and negative incidences, while it is sensitive to the error of the leading-edge angle at high incidences.
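The quantile-quantile check used to vet the small error samples can be reduced to a single number, the correlation between the sorted sample and the corresponding normal quantiles. A numpy/stdlib sketch on synthetic error samples (the tolerances are invented, and this probability-plot correlation is a complement to, not a replacement for, the Shapiro-Wilk statistic):

```python
import numpy as np
from statistics import NormalDist

def qq_correlation(sample):
    """Probability-plot correlation: how well the sorted sample aligns
    with normal quantiles (the numerical analogue of eyeballing a Q-Q
    plot; values near 1 support a normal distribution form)."""
    n = len(sample)
    probs = (np.arange(1, n + 1) - 0.5) / n          # plotting positions
    theo = np.array([NormalDist().inv_cdf(p) for p in probs])
    return np.corrcoef(np.sort(sample), theo)[0, 1]

rng = np.random.default_rng(0)
normal_errors = rng.normal(0.0, 0.01, 80)     # plausible radius errors
skewed_errors = rng.exponential(0.01, 80)     # clearly non-normal sample

print(f"normal sample: r = {qq_correlation(normal_errors):.3f}")
print(f"skewed sample: r = {qq_correlation(skewed_errors):.3f}")
```

Once a distribution form is accepted this way, it can be sampled to drive the polynomial-chaos propagation the abstract describes.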
Traffic flow prediction plays an important role in intelligent transportation applications closely related to people's daily life, such as traffic control, navigation and path planning. In the last twenty years, many traffic flow prediction approaches have been proposed. However, some of these approaches use regression-based mechanisms, which cannot achieve accurate short-term prediction, while others use neural-network-based mechanisms, which do not work well with a limited amount of training data. To this end, a lightweight tensor-based traffic flow prediction approach is proposed that achieves efficient and accurate short-term prediction from continuous traffic flow data over a limited period of time. In the proposed approach, a tensor-based traffic flow model is first constructed to establish the multi-dimensional relationships among traffic flow values in continuous time intervals. Then, a CANDECOMP/PARAFAC decomposition based algorithm is employed to complete the missing values in the constructed tensor. Finally, the completed tensor can be used directly for efficient and accurate traffic flow prediction. Experiments on a real dataset indicate that the proposed approach outperforms many current approaches when the available traffic flow data are limited.
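The completion step can be sketched at rank 1: fit a CANDECOMP/PARAFAC model to the observed cells, copy its predictions into the missing cells, and repeat. A toy numpy version on a synthetic day × hour × road-segment tensor that is rank 1 by construction (real traffic tensors need higher rank and the paper's actual algorithm; this only shows the mechanics):

```python
import numpy as np

def rank1_cp(T, iters=50):
    """Rank-1 CANDECOMP/PARAFAC via higher-order power iteration:
    T is approximated by lam * outer(a, b, c)."""
    a = np.ones(T.shape[0]); b = np.ones(T.shape[1]); c = np.ones(T.shape[2])
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    lam = np.einsum('ijk,i,j,k->', T, a, b, c)
    return lam * np.einsum('i,j,k->ijk', a, b, c)

def complete(T, mask, sweeps=20):
    """EM-style completion: impute missing cells, refit the CP model,
    copy its predictions back into the missing cells, repeat."""
    filled = np.where(mask, T, T[mask].mean())
    for _ in range(sweeps):
        approx = rank1_cp(filled)
        filled = np.where(mask, T, approx)
    return filled

# Hypothetical traffic tensor: day x hour x road-segment, rank 1 by
# construction, so the missing cells are exactly recoverable.
rng = np.random.default_rng(0)
day, hour, seg = rng.random(7) + 0.5, rng.random(24) + 0.5, rng.random(5) + 0.5
truth = np.einsum('i,j,k->ijk', day, hour, seg)
mask = rng.random(truth.shape) > 0.3          # ~30% of readings missing
recovered = complete(truth, mask)
err = np.abs(recovered - truth)[~mask].max()
print(f"max error on missing cells: {err:.2e}")
```

The multilinear structure is what lets so few observed cells pin down the whole tensor, which is the paper's argument for tensors over generic regressors in data-scarce settings.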
Fault diagnosis plays an increasingly vital role in guaranteeing machine reliability in industrial enterprises. Among the available solutions, deep learning (DL) methods have gained popularity for their ability to extract features from raw historical data. However, the performance of DL relies on a huge amount of labeled data, which is costly to obtain in the real world because labeling is usually done by hand. To achieve good performance with limited labeled data, this research proposes a threshold-controlled generative adversarial network (TCGAN). First, the 1D vibration signals are converted into 2D images, which are used as the input of the TCGAN. Second, the TCGAN generates pseudo data with a distribution similar to that of the limited labeled data; with pseudo data generation, the training dataset can be enlarged, and the additional labeled data further improve the fault diagnosis performance of the TCGAN. Third, to mitigate the instability of the generated data, a threshold control is presented that adjusts the relationship between the discriminator and the generator dynamically and automatically. The proposed TCGAN is validated on datasets from Case Western Reserve University and a self-priming centrifugal pump. The prediction accuracies with limited labeled data reach 99.96% and 99.898%, better even than other methods tested on the fully labeled datasets.
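The abstract does not detail the threshold control, so the following is only a guess at its general shape: a rule that compares discriminator and generator losses against a threshold each round and rebalances their update counts (`update_schedule` and its step counts are entirely hypothetical, not the paper's mechanism):

```python
def update_schedule(d_loss, g_loss, threshold=1.5):
    """Hypothetical threshold control for GAN training: decide how many
    discriminator vs. generator steps to run next round based on which
    side is currently losing (loss ratio compared against a threshold)."""
    if d_loss > threshold * g_loss:      # discriminator lagging behind
        return {"d_steps": 2, "g_steps": 1}
    if g_loss > threshold * d_loss:      # generator lagging behind
        return {"d_steps": 1, "g_steps": 2}
    return {"d_steps": 1, "g_steps": 1}  # roughly balanced training

print(update_schedule(d_loss=0.9, g_loss=0.4))
print(update_schedule(d_loss=0.5, g_loss=0.5))
```

Whatever its exact form, the point of such a rule is to stop either network from collapsing the adversarial game, which is what stabilizes the pseudo data fed back into training.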
In Bosnia and Herzegovina (BiH), the number of weather stations (WS) monitoring all the climatic parameters required by the FAO-56 Penman-Monteith (FAO-PM) equation is limited. It is therefore of great importance to be able to calculate reference evapotranspiration (ET0) for every WS in BiH (around 150), regardless of the number of climate parameters each collects. This problem can be solved with alternative equations that require less climatological data while still estimating daily and monthly ET0 reliably. The main objective of this study was to validate, against the FAO-PM method, suitable and reliable alternative ET0 equations that require less input data and have a simple calculation procedure, with a special focus on the Thornthwaite and Turc methods previously often used in BiH. To this end, 12 alternative ET0 calculation methods and 21 locally adjusted versions of the same equations were validated against the FAO-PM method. Daily climatic data recorded at sixteen WS, including mean maximum and minimum air temperature (°C), precipitation (mm), minimum and maximum relative humidity (%), wind speed (m s^-1) and sunshine hours (h) for the period 1961-2015 (55 years), were collected and averaged over each month. Several statistical indicators, the coefficient of determination (R^2), mean bias error (MBE), variance of the distribution of differences (sd^2), root mean square difference (RMSD) and mean absolute error (MAE), were used to assess the performance of the alternative ET0 equations. The results, confirmed by the various statistical indicators, show that the most suitable and reliable alternative equation for monthly ET0 calculation in BiH is the locally adjusted Trajkovic method, with the adjusted Hargreaves-Samani method the second best performing. The two ET0 methods most frequently used in BiH until now, Thornthwaite and Turc, ranked low.
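The Hargreaves-Samani equation ranked second here is a good example of why these alternatives matter: it needs only temperature data plus extraterrestrial radiation, which is computable from latitude and date. A sketch of the standard FAO-56 form with illustrative inputs (values assumed, not from the BiH stations):

```python
import math

def hargreaves_samani(t_mean, t_max, t_min, ra):
    """Hargreaves-Samani reference evapotranspiration (mm/day):
        ET0 = 0.0023 * 0.408 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin)
    with Ra the extraterrestrial radiation in MJ m-2 day-1 (the 0.408
    factor converts it to equivalent evaporation in mm/day)."""
    return 0.0023 * 0.408 * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)

# Illustrative mid-latitude summer day (not taken from the study):
et0 = hargreaves_samani(t_mean=22.0, t_max=29.0, t_min=15.0, ra=40.0)
print(f"ET0 = {et0:.2f} mm/day")
```

Local adjustment, the step the study found decisive, typically means recalibrating the 0.0023 coefficient (or the exponent on the temperature range) against FAO-PM values at nearby full-data stations.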
The abundance of spectral information provided by hyperspectral imagery offers great benefits for many applications. However, processing such high-dimensional data volumes is a challenge, because the high inter-band correlation can make many bands redundant. This study aimed to reduce the risk of the "dimension disaster" in the classification of coastal wetlands from hyperspectral images with limited training samples. It developed a hyperspectral classification algorithm that combines subspace partitioning and infinite probabilistic latent graph ranking in a random patch network (the SSP-IPLGR-RPnet model). The approach applies SSP techniques and the IPLGR algorithm to reduce the dimensionality of the hyperspectral data, while the RPnet model overcomes the mismatch between the dimensionality of the hyperspectral bands and the small number of training samples. The results showed that the proposed algorithm performed better and was more robust with limited training data than several other state-of-the-art methods: the overall accuracy was nearly 4% higher on average than that of the multi-kernel SVM and RF algorithms, and compared with the EMAP, MSTV, ERF, ERW, RMKL and 3D-CNN algorithms, SSP-IPLGR-RPnet provided better classification performance in a shorter time.
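Subspace partitioning exploits exactly the inter-band correlation mentioned above: contiguous, highly correlated bands are grouped into one subspace, and dimensionality reduction is then applied per group. A greedy sketch on a synthetic cube (the threshold and the correlation criterion are illustrative; the paper's SSP procedure is not specified in the abstract):

```python
import numpy as np

def partition_bands(cube, threshold=0.95):
    """Greedy subspace partitioning: walk along the spectral axis and
    start a new contiguous band group whenever the correlation between
    neighbouring bands drops below the threshold."""
    n_bands = cube.shape[-1]
    flat = cube.reshape(-1, n_bands)   # pixels x bands
    groups, current = [], [0]
    for b in range(1, n_bands):
        r = np.corrcoef(flat[:, b - 1], flat[:, b])[0, 1]
        if r >= threshold:
            current.append(b)
        else:
            groups.append(current)
            current = [b]
    groups.append(current)
    return groups

# Synthetic 8x8 image with 6 bands: bands 0-2 share one spatial pattern,
# bands 3-5 another, so two subspaces should be recovered.
rng = np.random.default_rng(0)
base1, base2 = rng.random((8, 8)), rng.random((8, 8))
cube = np.stack([base1, base1 * 1.1, base1 * 0.9,
                 base2, base2 * 1.2, base2 * 0.8], axis=-1)
print(partition_bands(cube))
```

Working per subspace keeps the feature count per classifier small, which is what makes limited training samples go further.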
Funding (cable temperature study): Project 2017G006-N, supported by the Science and Technology Research and Development Program of China Railway Corporation.
Funding (energy demand forecasting study): Project 51606225, supported by the National Natural Science Foundation of China; Project 2016JJ2144, supported by the Hunan Provincial Natural Science Foundation of China; Project 502221703, supported by the Graduate Independent Explorative Innovation Foundation of Central South University, China.
Funding: The Fundamental Research Funds for the Central Universities under contract Nos. 201562030 and 201612004.
Abstract: Natural mortality rate (M) is one of the essential parameters in fishery stock assessment; however, the estimation of M is commonly rough, and changes in M due to natural and anthropogenic impacts have long been ignored. The simplification of M estimation and the influence of M variations on the assessment and management of fishery stocks remain poorly understood. This study evaluated the impacts of changes in the natural mortality of Spanish mackerel (Scomberomorus niphonius) on their management strategies with data-limited methods. We tested the performance of a variety of management procedures (MPs) under variations of M in the mackerel stock using diverse estimation methods. The management strategy evaluation showed that four management procedures (DCAC, SPMSY, curE75 and minlenLopt1) were more robust to changes in M than the others; however, their performance was substantially influenced by the significant decrease of M from the 1970s to 2017. Relative population biomass (measured as the probability of B > 0.5 BMSY) increased significantly with the decrease of M, whereas the probability of overfishing varied remarkably across MPs. The decrease of M had minor effects on the long-term yield of curE75 and minlenLopt1, and reduced the fluctuation of yield (measured as the probability of AAVY < 15%) for DCAC and SPMSY. In general, the different methods for M estimation had minor effects on the performance of MPs, whereas the temporal changes of M had substantial influences. Considering the fishery status of Spanish mackerel in China, we recommend curE75 as the best trade-off between fishery resource exploitation and conservation, and we also discuss the potentials and issues in its implementation.
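The central role of M in such evaluations comes from its place in the standard catch accounting. As a hedged toy illustration (not the paper's operating model), the Baranov catch equation shows how an assumed M directly changes the catch attributed to the same fishing mortality:

```python
import math

def baranov_catch(N, F, M):
    """Baranov catch equation: with total mortality Z = F + M, the
    catch (in numbers) taken from initial abundance N over one time
    step is C = (F / Z) * (1 - exp(-Z)) * N."""
    Z = F + M
    return (F / Z) * (1.0 - math.exp(-Z)) * N

# Hypothetical stock: same fishing mortality F, two assumed values of M
N0, F = 1_000_000, 0.3
c_low_m = baranov_catch(N0, F, 0.2)   # lower natural mortality
c_high_m = baranov_catch(N0, F, 0.5)  # higher natural mortality
print(c_low_m > c_high_m)  # True: higher M leaves fewer fish to the fishery
```

This is why a decreasing M over decades, as reported for Spanish mackerel, shifts both the biomass reference points and the apparent performance of each MP.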
Abstract: The manufacturing of composite structures is a highly complex task with inevitable risks, particularly those associated with the aleatoric and epistemic uncertainty of both materials and processes, as well as the need for in-situ decision-making to mitigate defects during manufacturing. In the context of aerospace composites production in particular, there is a heightened impetus to address and reduce this risk. Current qualification and substantiation frameworks within the aerospace industry define tractable methods for risk reduction. In parallel, Industry 4.0 is an emerging set of technologies and tools that can enable better decision-making towards risk reduction, supported by data-driven models. It offers new paradigms for manufacturers by enabling in-situ decisions that optimize the process as a dynamic system. However, the static nature of current (pre-Industry 4.0) best-practice frameworks may be viewed as at odds with this emerging approach. In addition, many of the predictive tools leveraged in an Industry 4.0 system are black-box in nature, which raises further concerns of tractability, interpretability and, ultimately, risk. This article presents a perspective on the current state of the art in the aerospace composites industry, focusing on risk reduction in autoclave processing as an example system, while reviewing current trends and needs towards a Composites 4.0 future.
Funding: Funded by the National Key R&D Program of China (2021YFC3320302).
Abstract: In recent years, deep learning algorithms have been popular for recognizing targets in synthetic aperture radar (SAR) images. However, due to overfitting, the performance of these models tends to worsen when only a small number of training data are available. To address overfitting and the unsatisfactory performance of network models in small-sample remote sensing target recognition, this paper uses a deep residual network to autonomously acquire image features and proposes the deep feature Bayesian classifier model (RBnet) for SAR image target recognition. In the RBnet, a Bayesian classifier is used to improve SAR image target recognition and to raise accuracy when the training data are limited. Experimental results on the MSTAR dataset show that the RBnet can fully exploit the effective information in limited samples and recognize targets in SAR images more accurately. Compared with other state-of-the-art methods, our method offers significant improvements in recognition accuracy under limited training data. Note that the RBnet is only moderately difficult to implement and has value for popularization and application in engineering scenarios in the field of small-sample remote sensing target recognition.
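The abstract does not specify the form of the Bayesian classifier applied to the deep features; one common and minimal choice for small-sample settings is a Gaussian (naive) Bayes classifier over the extracted feature vectors, sketched below on synthetic "features" (the class means, dimensions and sample counts are illustrative assumptions):

```python
import numpy as np

class GaussianBayesClassifier:
    """Minimal Gaussian naive Bayes over feature vectors, as one way to
    classify deep features when labeled samples are scarce."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes])
        self.prior = np.array([(y == c).mean() for c in self.classes])
        return self

    def predict(self, X):
        # log p(c|x) = log p(c) + sum_d log N(x_d; mu_cd, var_cd) + const
        ll = -0.5 * (np.log(2 * np.pi * self.var)[None, :, :]
                     + (X[:, None, :] - self.mu[None, :, :]) ** 2
                     / self.var[None, :, :]).sum(axis=2)
        return self.classes[np.argmax(ll + np.log(self.prior)[None, :], axis=1)]

rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 1.0, size=(30, 8))   # class-0 features (few samples)
X1 = rng.normal(3.0, 1.0, size=(30, 8))   # class-1 features
X = np.vstack([X0, X1]); y = np.array([0] * 30 + [1] * 30)
clf = GaussianBayesClassifier().fit(X, y)
print((clf.predict(X) == y).mean())  # near-perfect on well-separated toy features
```

Because the class-conditional statistics are estimated per feature, such a classifier has very few parameters, which is the property that makes the Bayesian head attractive when training data are limited.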
Funding: This research is partially supported by NIH grant R15EB024283.
Abstract: Sparse-view tomography has many applications, such as low-dose computed tomography (CT). With undersampled data, a perfect image cannot be expected. The goal of this paper is to obtain a tomographic image that is better than the naive filtered backprojection (FBP) reconstruction that uses linear interpolation to complete the measurements. This paper proposes a method to estimate the unmeasured projections by displacement-function interpolation. Displacement-function estimation is a non-linear procedure, and the linear interpolation is performed on the displacement function instead of on the sinogram itself. As a result, the estimated measurements are not a linear transformation of the measured data. The proposed method is compared with linear interpolation methods and shows superior performance.
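The key idea, interpolating the motion between neighboring views rather than their values, can be illustrated with a deliberately crude stand-in for the paper's displacement function: a single integer shift estimated by cross-correlation (the actual method is richer than this sketch):

```python
import numpy as np

def estimate_shift(p0, p1):
    """Integer displacement between two 1D projections via the argmax of
    their circular cross-correlation (a crude stand-in for a full
    displacement-function estimate)."""
    corr = np.fft.ifft(np.fft.fft(p1) * np.conj(np.fft.fft(p0))).real
    k = int(np.argmax(corr))
    n = len(p0)
    return k - n if k > n // 2 else k

def interpolate_projection(p0, p1):
    """Synthesize the in-between view by shifting each neighbor halfway
    along the estimated displacement and averaging, instead of
    averaging the sinogram values directly."""
    d = estimate_shift(p0, p1)
    half = d / 2.0
    x = np.arange(len(p0))
    a = np.interp(x - half, x, p0, period=len(p0))
    b = np.interp(x + (d - half), x, p1, period=len(p1))
    return 0.5 * (a + b)

# Toy test: a Gaussian blob translating by 6 samples between views
x = np.arange(128)
blob = lambda c: np.exp(-0.5 * ((x - c) / 4.0) ** 2)
p0, p_true_mid, p1 = blob(50), blob(53), blob(56)
est = interpolate_projection(p0, p1)
lin = 0.5 * (p0 + p1)                        # naive sinogram interpolation
print(np.abs(est - p_true_mid).max() < np.abs(lin - p_true_mid).max())  # True
```

Direct averaging produces a double-humped "ghost" of the moving feature, whereas interpolating along the displacement reproduces the feature at its intermediate position, which is exactly the non-linearity the abstract refers to.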
基金supported by Aeronautical Science Foundation of China (No. 20100251006)Technological Foundation Project of China (No. J132012C001)
Abstract: For the random vibration of an airborne platform, accurate evaluation is a key indicator for ensuring the normal operation of airborne equipment in flight. However, only limited power spectral density (PSD) data can be obtained at the flight-test stage, so conventional evaluation methods cannot be employed when the distribution characteristics and prior information are unknown. In this paper, the fuzzy norm method (FNM) is proposed, which combines the advantages of fuzzy theory and norm theory. The proposed method can extract deep system information from limited data without taking the probability distribution into account. First, the FNM is employed to evaluate the variable interval and expanded uncertainty from limited PSD data, and its performance is demonstrated through the confidence level, reliability and computing accuracy of the expanded uncertainty. In addition, the optimal fuzzy parameters are discussed to meet the requirements of aviation standards and metrological practice. Finally, computer simulation is used to prove the adaptability of the FNM. Compared with statistical methods, the FNM is superior for evaluating expanded uncertainty from limited data, and the results show that the reliability of calculation and evaluation exceeds 95%.
基金supported by the National Natural Science Foundation of China(Grant Nos.52206053,52130603)。
Abstract: Predicting the external flow field with limited data or limited measurements has long attracted the interest of researchers in many industrial applications. The physics-informed neural network (PINN) provides a seamless framework for combining measured data with a deep neural network, making the network capable of enforcing certain physical constraints. Unlike data-driven models that learn an end-to-end mapping between sensor data and the high-dimensional flow field, a PINN needs no prior high-dimensional field as a training dataset and can construct the mapping from sensor data to the high-dimensional flow field directly. However, extrapolation of the flow field in the temporal direction is limited due to the lack of training data. Therefore, we apply the long short-term memory (LSTM) network together with the physics-informed neural network (PINN) to predict the flow field and hydrodynamic force in the future temporal domain using limited data measured in the spatial domain. The physical constraints (the conservation laws of fluid flow, e.g., the Navier-Stokes equations) are embedded into the loss function to enforce the trained neural network to capture the latent physical relations between the output fluid parameters and the input tempo-spatial parameters. The sparsely measured points in this work are obtained from a computational fluid dynamics (CFD) solver based on the local radial basis function (RBF) method. Different numbers of spatially measured points (4-35) downstream of the cylinder are trained with and without prior knowledge of the Reynolds number to validate the availability and accuracy of the proposed approach. More practical applications of flow-field prediction can compute the drag and lift forces along the cylinder while taking different geometry shapes into account. By comparing the flow-field reconstruction and force prediction with CFD results, the proposed approach is shown to produce a comparable level of accuracy while requiring significantly fewer data in the spatial domain. The numerical results demonstrate that the proposed approach, with a specific deep neural network configuration, has great potential for emerging cases where measured data are often limited.
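The composite loss structure described above (a data term on sparse sensors plus a PDE-residual term) can be sketched in a few lines. The paper embeds the Navier-Stokes equations; the toy below substitutes the 1D heat equation so the residual can be checked against an exact solution, and evaluates the loss on a grid with finite differences rather than automatic differentiation:

```python
import numpy as np

def physics_informed_loss(u_pred, u_data, data_mask, dx, dt, alpha, w_phys=1.0):
    """PINN-style composite loss for a field u(t, x) on a grid: a data
    term on sparse measurements plus the residual of a toy PDE (here
    the 1D heat equation u_t = alpha * u_xx, standing in for the
    Navier-Stokes constraints used in the paper)."""
    # PDE residual via finite differences on interior points
    u_t = (u_pred[1:, 1:-1] - u_pred[:-1, 1:-1]) / dt
    u_xx = (u_pred[:-1, 2:] - 2 * u_pred[:-1, 1:-1] + u_pred[:-1, :-2]) / dx**2
    loss_phys = np.mean((u_t - alpha * u_xx) ** 2)
    # data term only where "sensors" exist
    loss_data = np.mean((u_pred[data_mask] - u_data[data_mask]) ** 2)
    return loss_data + w_phys * loss_phys

# Sanity check: an exact heat-equation solution should give near-zero loss
alpha, dx, dt = 0.1, 0.02, 1e-4
x = np.arange(0, 1 + dx / 2, dx)
t = np.arange(0, 0.01 + dt / 2, dt)
u_exact = np.exp(-alpha * np.pi**2 * t[:, None]) * np.sin(np.pi * x[None, :])
mask = np.zeros_like(u_exact, dtype=bool)
mask[:, ::10] = True                          # sparse sensor columns
print(physics_informed_loss(u_exact, u_exact, mask, dx, dt, alpha) < 1e-3)  # True
```

In the paper's setting the residual is computed by differentiating the network itself, but the trade-off is the same: the physics term supplies the constraint that the sparse sensor data alone cannot.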
基金supported by the Director,Office of Science,Office of High Energy Physics,of the U.S.Department of Energy under Contract No.DE-AC02-05CH11231by the National Natural Science Foundation of China under Grant No.11475097
Abstract: Coherent pulse stacking (CPS) is a new time-domain coherent addition technique that stacks several optical pulses into a single output pulse, enabling high pulse energy and high average power. A Z-domain model targeting the pulsed laser is assembled to describe the optical interference process. An algorithm that extracts the cavity phase and pulse phases from limited data, where only the pulse intensity is available, is developed to diagnose optical cavity resonators. We also implement the algorithm on a cascaded system of multiple optical cavities, achieving phase errors of less than 1.0 degree (root mean square), which ensures the stability of CPS.
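The core difficulty, recovering phase from intensity-only data, is already visible in the simplest two-beam case. As a hedged illustration (the paper's Z-domain algorithm handles many pulses and cascaded cavities, far beyond this), the two-beam interference law can be inverted for the relative phase magnitude:

```python
import math

def relative_phase(I_total, I1, I2):
    """Recover the magnitude of the relative phase between two
    interfering fields from intensities only, using the two-beam law
    I = I1 + I2 + 2*sqrt(I1*I2)*cos(phi). The sign of phi remains
    ambiguous from a single intensity measurement."""
    c = (I_total - I1 - I2) / (2.0 * math.sqrt(I1 * I2))
    return math.acos(max(-1.0, min(1.0, c)))

# Toy check: synthesize an interference intensity for phi = 0.7 rad
I1, I2, phi = 1.0, 0.64, 0.7
I = I1 + I2 + 2.0 * math.sqrt(I1 * I2) * math.cos(phi)
print(round(relative_phase(I, I1, I2), 3))  # 0.7
```

The sign ambiguity noted in the comment is one reason the paper's diagnostic algorithm needs multiple intensity samples per round trip rather than a single measurement.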
基金the National Natural Science Foundation of China(No.51790512)the 111 Project(No.B17037)the National Key Laboratory Foundation,Industry-Academia-Research Collaboration Project of Aero Engine Corporation of China(No.HFZL2018CXY011-1)and MIIT。
Abstract: To investigate the influence of real leading-edge manufacturing error on the aerodynamic performance of high-subsonic compressor blades, a family of leading-edge manufacturing error data was obtained from measured compressor cascades. Considering the limited samples, the distribution forms of the leading-edge angle and leading-edge radius were evaluated by the Shapiro-Wilk test and quantile-quantile plots; the statistical characteristics provided can inform future related research. The parametric design methods B-spline and Bezier are adopted to create geometry models with manufacturing error based on the leading-edge angle and leading-edge radius. The influence of real manufacturing error is quantified and analyzed by a self-developed non-intrusive polynomial chaos method and Sobol' indices. The mechanism by which leading-edge manufacturing error affects aerodynamic performance is discussed. The results show that the total pressure loss coefficient is sensitive to leading-edge manufacturing error compared with the static pressure ratio, especially at high incidence. Specifically, manufacturing error of the leading edge influences the local flow acceleration and subsequently causes fluctuations in the downstream flow. The aerodynamic performance is sensitive to the manufacturing error of the leading-edge radius at design and negative incidences, while it is sensitive to the manufacturing error of the leading-edge angle under operating conditions with high incidence.
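The Sobol' indices used above measure the fraction of output variance attributable to each input. The paper computes them from a non-intrusive polynomial chaos surrogate; the sketch below instead uses a plain Monte Carlo (Saltelli-style) estimator on a toy function, only to illustrate what the indices quantify:

```python
import numpy as np

def first_order_sobol(f, d, n=100_000, seed=0):
    """Monte Carlo (Saltelli-style) estimator of first-order Sobol'
    indices S_i = V_i / V for f on the unit hypercube, using paired
    sample matrices A and B."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                 # resample only coordinate i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Toy model: y = 4*x0 + x1 (x2 inert); analytic S = (16/17, 1/17, 0)
S = first_order_sobol(lambda X: 4.0 * X[:, 0] + X[:, 1] + 0.0 * X[:, 2], d=3)
print(np.round(S, 2))
```

A polynomial chaos surrogate yields the same indices analytically from the expansion coefficients, which is why it is preferred when, as here, each model evaluation is an expensive CFD run.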
基金supported by the Beijing Natural Science Foundation under Nos.4192004 and 4212016the National Natural Science Foundation of China under Grant Nos.61703013 and 62072016+3 种基金the Project of Beijing Municipal Education Commission under Grant Nos.KM201810005024 and KM201810005023Foundation from School of Computer Science and Technology,Beijing University of Technology under Grants No.2020JSJKY005the International Research Cooperation Seed Fund of Beijing University of Technology under Grant No.2021B29National Engineering Laboratory for Industrial Big-data Application Technology.
Abstract: Traffic flow prediction plays an important role in intelligent transportation applications, such as traffic control, navigation and path planning, which are closely related to people's daily lives. In the last twenty years, many traffic flow prediction approaches have been proposed. However, some of these approaches use regression-based mechanisms, which cannot achieve accurate short-term traffic flow prediction, while others use neural network-based mechanisms, which cannot work well with a limited amount of training data. To this end, a lightweight tensor-based traffic flow prediction approach is proposed, which can achieve efficient and accurate short-term traffic flow prediction with continuous traffic flow data over a limited period of time. In the proposed approach, first, a tensor-based traffic flow model is proposed to establish the multi-dimensional relationships among traffic flow values in continuous time intervals. Then, a CANDECOMP/PARAFAC decomposition based algorithm is employed to complete the missing values in the constructed tensor. Finally, the completed tensor can be directly used to achieve efficient and accurate traffic flow prediction. Experiments on a real dataset indicate that the proposed approach outperforms many current approaches on traffic flow prediction with a limited amount of traffic flow data.
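A CANDECOMP/PARAFAC completion step of the kind described can be sketched with an EM-style loop: alternate exact ALS factor updates on an imputed tensor with re-imputation of the missing entries. The paper's algorithm may differ in its optimizer and model details; this is only an illustrative implementation on a synthetic low-rank tensor:

```python
import numpy as np

def cp_complete(X, mask, rank=2, iters=200, seed=0):
    """EM-style CP completion of a 3-way tensor: fit [[A, B, C]] by
    alternating least squares on a tensor whose missing entries are
    repeatedly re-imputed from the current model."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.random((I, rank)); B = rng.random((J, rank)); C = rng.random((K, rank))
    Xf = np.where(mask, X, X[mask].mean())   # initial mean imputation
    for _ in range(iters):
        # normal-equation ALS updates (MTTKRP @ inverse Gram)
        A = np.einsum('ijk,jr,kr->ir', Xf, B, C) @ np.linalg.inv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', Xf, A, C) @ np.linalg.inv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', Xf, A, B) @ np.linalg.inv((A.T @ A) * (B.T @ B))
        Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
        Xf = np.where(mask, X, Xhat)         # re-impute missing entries
    return Xhat

# Toy tensor with exact rank-2 structure; hide 30% of the entries
rng = np.random.default_rng(1)
A0, B0, C0 = rng.random((6, 2)), rng.random((7, 2)), rng.random((8, 2))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
mask = rng.random(X.shape) > 0.3
Xc = cp_complete(X, mask, rank=2)
print(np.abs(Xc - X).max() < 0.1)            # hidden entries recovered closely
```

In the traffic setting, the three tensor modes would index quantities such as day, time interval and road link, so a completed tensor directly provides the short-term forecast for the missing slots.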
基金supported in part by the National Key R&D Program of China(No.2018AAA0101700)the National Natural Science Foundation of China(No.51805192)the State Key Laboratory of Digital Manufacturing Equipment and Technology of Huazhong University of Science and Technology(No.DMETKF2020029).
Abstract: Fault diagnosis plays an increasingly vital role in guaranteeing machine reliability in industrial enterprises. Among all solutions, deep learning (DL) methods have gained popularity for their ability to extract features from raw historical data. However, the performance of DL relies on a huge amount of labeled data, which is costly to obtain in the real world because the labeling is usually done by hand. To obtain good performance with limited labeled data, this research proposes a threshold-control generative adversarial network (TCGAN) method. First, the 1D vibration signals are converted into 2D images, which are used as the input of the TCGAN. Second, the TCGAN generates pseudo data with a distribution similar to that of the limited labeled data; this enlarges the training dataset, and the increase in labeled data further promotes the fault diagnosis performance of the TCGAN. Third, to mitigate the instability of the generated data, a threshold control is presented to adjust the relationship between the discriminator and the generator dynamically and automatically. The proposed TCGAN is validated on datasets from Case Western Reserve University and a self-priming centrifugal pump. The prediction accuracies with limited labeled data reached 99.96% and 99.898%, which are even better than those of other methods tested on the whole labeled datasets.
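The first step, turning a 1D vibration signal into a 2D image, is the part that can be shown compactly. The paper does not fully specify its conversion; a common variant is sequential reshaping with min-max normalization to an 8-bit grayscale image, sketched here on a synthetic signal:

```python
import numpy as np

def signal_to_image(signal, size=64):
    """Convert a 1D vibration signal into a size x size grayscale image
    by sequential reshaping and min-max normalization to [0, 255].
    This is one common conversion; other variants use spectrograms."""
    seg = signal[:size * size]
    img = seg.reshape(size, size)
    lo, hi = img.min(), img.max()
    return np.round(255.0 * (img - lo) / (hi - lo)).astype(np.uint8)

# Toy vibration-like signal: decaying oscillation plus noise
rng = np.random.default_rng(0)
t = np.arange(64 * 64)
sig = np.sin(0.05 * t) * np.exp(-t / 3000.0) + 0.1 * rng.standard_normal(t.size)
img = signal_to_image(sig)
print(img.shape, img.dtype, int(img.min()), int(img.max()))  # (64, 64) uint8 0 255
```

Representing the signal as an image is what lets the GAN's image-domain generator and discriminator, and the threshold control between them, be applied to vibration data at all.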
Abstract: In Bosnia and Herzegovina (BiH), the number of weather stations (WS) monitoring all climatic parameters required for the FAO-56 Penman-Monteith (FAO-PM) equation is limited. In fact, it is of great need and importance to be able to calculate reference evapotranspiration (ET0) for every WS in BiH (around 150), regardless of the number of climate parameters they collect. This problem can be solved by using alternative equations that require fewer climatological data for reliable estimation of daily and monthly ET0. The main objective of this study was to validate and determine, against the FAO-PM method, suitable and reliable alternative ET0 equations that require less input data and have a simple calculation procedure, with a special focus on the Thornthwaite and Turc methods previously often used in BiH. To this end, 12 alternative ET0 calculation methods and 21 locally adjusted versions of the same equations were validated against the FAO-PM ET0 method. Daily climatic data recorded at sixteen WS, including mean maximum and minimum air temperature (degrees C), precipitation (mm), minimum and maximum relative humidity (%), wind speed (m/s) and sunshine hours (h) for the period 1961-2015 (55 years), were collected and averaged over each month. Several statistical indicators, the coefficient of determination (R^2), mean bias error (MBE), variance of the distribution of differences (sd^2), root mean square difference (RMSD) and mean absolute error (MAE), were used to assess the performance of the alternative ET0 equations. The results, confirmed by the various statistical indicators, show that the most suitable and reliable alternative equation for monthly ET0 calculation in BiH is the locally adjusted Trajkovic method. The adjusted Hargreaves-Samani method was the second-best-performing method. The two ET0 calculation methods most frequently used in BiH until now, Thornthwaite and Turc, ranked low.
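As an example of the temperature-only alternatives evaluated above, the standard Hargreaves-Samani equation needs only daily temperatures plus tabulated extraterrestrial radiation. The sketch below uses hypothetical inputs; locally adjusted versions of the kind ranked in the study recalibrate the 0.0023 coefficient:

```python
import math

def hargreaves_samani_et0(t_mean, t_max, t_min, ra):
    """Hargreaves-Samani reference evapotranspiration (mm/day):
    ET0 = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin),
    with Ra the extraterrestrial radiation expressed in mm/day of
    equivalent evaporation (tabulated from latitude and day of year)."""
    return 0.0023 * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)

# Hypothetical summer day: Tmean 22 C, Tmax 29 C, Tmin 15 C, Ra 16 mm/day
print(round(hargreaves_samani_et0(22.0, 29.0, 15.0, 16.0), 2))  # 5.48
```

Because Ra is computed, not measured, the equation runs at any station that records only temperature, which is exactly the data-availability problem the study addresses.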
基金supported by the National Natural Science Foundation of China (Grant Nos. 42106179 and 42076189)the Pilot Project of Monitoring Evaluation of Spartina Alterniflora in Shandong Province in 2021 “Remote Sensing Monitoring of Spartina Alterniflora”
Abstract: The abundance of spectral information provided by hyperspectral imagery offers great benefits for many applications. However, processing such high-dimensional data volumes is a challenge, because redundant bands may exist owing to high inter-band correlation. This study aimed to reduce the risk of the "curse of dimensionality" in the classification of coastal wetlands from hyperspectral images with limited training samples. The study developed a hyperspectral classification algorithm for coastal wetlands using a combination of subspace partitioning and infinite probabilistic latent graph ranking in a random patch network (the SSP-IPLGR-RPnet model). The SSP-IPLGR-RPnet approach applies SSP techniques and an IPLGR algorithm to reduce the dimensionality of the hyperspectral data. The RPnet model overcomes the mismatch between the dimensionality of hyperspectral bands and the small number of training samples. The results showed that the proposed algorithm had better classification performance and was more robust with limited training data than several other state-of-the-art methods. The overall accuracy was nearly 4% higher on average than that of the multi-kernel SVM and RF algorithms. Compared with the EMAP, MSTV, ERF, ERW, RMKL and 3D-CNN algorithms, the SSP-IPLGR-RPnet algorithm provided better classification performance in a shorter time.
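The subspace-partitioning idea exploits the high inter-band correlation the abstract mentions. The paper's SSP step is not fully specified here; a common correlation-based variant simply starts a new band group wherever the correlation between adjacent bands drops, as sketched on a synthetic cube:

```python
import numpy as np

def partition_bands(cube, threshold=0.9):
    """Split the spectral bands of an H x W x B cube into contiguous
    subspaces, starting a new group whenever the correlation between
    adjacent bands falls below the threshold."""
    H, W, B = cube.shape
    flat = cube.reshape(-1, B).astype(float)
    groups, start = [], 0
    for b in range(1, B):
        r = np.corrcoef(flat[:, b - 1], flat[:, b])[0, 1]
        if r < threshold:
            groups.append(list(range(start, b)))
            start = b
    groups.append(list(range(start, B)))
    return groups

# Toy cube: bands 0-3 share one spatial pattern, bands 4-7 another
rng = np.random.default_rng(0)
p1, p2 = rng.random((16, 16)), rng.random((16, 16))
bands = [p1 + 0.01 * rng.random((16, 16)) for _ in range(4)] \
      + [p2 + 0.01 * rng.random((16, 16)) for _ in range(4)]
cube = np.stack(bands, axis=-1)
print(partition_bands(cube))  # [[0, 1, 2, 3], [4, 5, 6, 7]]
```

Each contiguous subspace can then be reduced independently (e.g., by averaging or by the IPLGR ranking step), keeping the feature dimensionality commensurate with the small training set.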