Prunus serotina and Robinia pseudoacacia are the most widespread invasive trees in Central Europe.In addition,according to climate models,decreased growth of many economically and ecologically important native trees w...Prunus serotina and Robinia pseudoacacia are the most widespread invasive trees in Central Europe.In addition,according to climate models,decreased growth of many economically and ecologically important native trees will likely be observed in the future.We aimed to assess the impact of these two neophytes,which differ in the biomass range and nitrogen-fixing abilities observed in Central European conditions,on the relative aboveground biomass increments of native oaks Qucrcus robur and Q.petraea and Scots pine Pinus sylvestris.We aimed to increase our understanding of the relationship between facilitation and competition between woody alien species and overstory native trees.We established 72 circular plots(0.05 ha)in two different forest habitat types and stands varying in age in western Poland.We chose plots with different abundances of the studied neophytes to determine how effects scaled along the quantitative invasion gradient.Furthermore,we collected growth cores of the studied native species,and we calculated aboveground biomass increments at the tree and stand levels.Then,we used generalized linear mixed-effects models to assess the impact of invasive species abundances on relative aboveground biomass increments of native tree species.We did not find a biologically or statistically significant impact of invasive R.pseudoacacia or P.serotina on the relative aboveground,biomass increments of native oaks and pines along the quantitative gradient of invader biomass or on the proportion of total stand biomass accounted for by invaders.The neophytes did not act as native tree growth stimulators but also did not compete with them for resources,which would escalate the negative impact of climate change on pines and oaks.The neophytes should not significantly modify the carbon sequestration capacity of the native species.Our work combines elements of the per capita effect of invasion with research on mixed forest management.展开更多
Time-Sensitive Network(TSN)with deterministic transmission capability is increasingly used in many emerging fields.It mainly guarantees the Quality of Service(QoS)of applications with strict requirements on time and s...Time-Sensitive Network(TSN)with deterministic transmission capability is increasingly used in many emerging fields.It mainly guarantees the Quality of Service(QoS)of applications with strict requirements on time and security.One of the core features of TSN is traffic scheduling with bounded low delay in the network.However,traffic scheduling schemes in TSN are usually synthesized offline and lack dynamism.To implement incremental scheduling of newly arrived traffic in TSN,we propose a Dynamic Response Incremental Scheduling(DR-IS)method for time-sensitive traffic and deploy it on a software-defined time-sensitive network architecture.Under the premise of meeting the traffic scheduling requirements,we adopt two modes,traffic shift and traffic exchange,to dynamically adjust the time slot injection position of the traffic in the original scheme,and determine the sending offset time of the new timesensitive traffic to minimize the global traffic transmission jitter.The evaluation results show that DRIS method can effectively control the large increase of traffic transmission jitter in incremental scheduling without affecting the transmission delay,thus realizing the dynamic incremental scheduling of time-sensitive traffic in TSN.展开更多
Currently,distributed routing protocols are constrained by offering a single path between any pair of nodes,thereby limiting the potential throughput and overall network performance.This approach not only restricts th...Currently,distributed routing protocols are constrained by offering a single path between any pair of nodes,thereby limiting the potential throughput and overall network performance.This approach not only restricts the flow of data but also makes the network susceptible to failures in case the primary path is disrupted.In contrast,routing protocols that leverage multiple paths within the network offer a more resilient and efficient solution.Multipath routing,as a fundamental concept,surpasses the limitations of traditional shortest path first protocols.It not only redirects traffic to unused resources,effectively mitigating network congestion,but also ensures load balancing across the network.This optimization significantly improves network utilization and boosts the overall performance,making it a widely recognized efficient method for enhancing network reliability.To further strengthen network resilience against failures,we introduce a routing scheme known as Multiple Nodes with at least Two Choices(MNTC).This innovative approach aims to significantly enhance network availability by providing each node with at least two routing choices.By doing so,it not only reduces the dependency on a single path but also creates redundant paths that can be utilized in case of failures,thereby enhancing the overall resilience of the network.To ensure the optimal placement of nodes,we propose three incremental deployment algorithms.These algorithms carefully select the most suitable set of nodes for deployment,taking into account various factors such as node connectivity,traffic patterns,and network topology.By deployingMNTCon a carefully chosen set of nodes,we can significantly enhance network reliability without the need for a complete overhaul of the existing infrastructure.We have conducted extensive evaluations of MNTC in diverse topological spaces,demonstrating its effectiveness in maintaining high network availability with minimal path stretch.The results are impressive,showing that even when implemented on just 60%of nodes,our incremental deployment method significantly boosts network availability.This underscores the potential of MNTC in enhancing network resilience and performance,making it a viable solution for modern networks facing increasing demands and complexities.The algorithms OSPF,TBFH,DC and LFC perform fast rerouting based on strict conditions,while MNTC is not restricted by these conditions.In five real network topologies,the average network availability ofMNTCis improved by 14.68%,6.28%,4.76%and 2.84%,respectively,compared with OSPF,TBFH,DC and LFC.展开更多
Multispecies forests have received increased scientific attention,driven by the hypothesis that biodiversity improves ecological resilience.However,a greater species diversity presents challenges for forest management...Multispecies forests have received increased scientific attention,driven by the hypothesis that biodiversity improves ecological resilience.However,a greater species diversity presents challenges for forest management and research.Our study aims to develop basal area growth models for tree species cohorts.The analysis is based on a dataset of 423 permanent plots(2,500 m^(2))located in temperate forests in Durango,Mexico.First,we define tree species cohorts based on individual and neighborhood-based variables using a combination of principal component and cluster analyses.Then,we estimate the basal area increment of each cohort through the generalized additive model to describe the effect of tree size,competition,stand density and site quality.The principal component and cluster analyses assign a total of 37 tree species to eight cohorts that differed primarily with regard to the distribution of tree size and vertical position within the community.The generalized additive models provide satisfactory estimates of tree growth for the species cohorts,explaining between 19 and 53 percent of the total variation of basal area increment,and highlight the following results:i)most cohorts show a"rise-and-fall"effect of tree size on tree growth;ii)surprisingly,the competition index"basal area of larger trees"had showed a positive effect in four of the eight cohorts;iii)stand density had a negative effect on basal area increment,though the effect was minor in medium-and high-density stands,and iv)basal area growth was positively correlated with site quality except for an oak cohort.The developed species cohorts and growth models provide insight into their particular ecological features and growth patterns that may support the development of sustainable management strategies for temperate multispecies forests.展开更多
Humans are experiencing the inclusion of artificial agents in their lives,such as unmanned vehicles,service robots,voice assistants,and intelligent medical care.If the artificial agents cannot align with social values...Humans are experiencing the inclusion of artificial agents in their lives,such as unmanned vehicles,service robots,voice assistants,and intelligent medical care.If the artificial agents cannot align with social values or make ethical decisions,they may not meet the expectations of humans.Traditionally,an ethical decision-making framework is constructed by rule-based or statistical approaches.In this paper,we propose an ethical decision-making framework based on incremental ILP(Inductive Logic Programming),which can overcome the brittleness of rule-based approaches and little interpretability of statistical approaches.As the current incremental ILP makes it difficult to solve conflicts,we propose a novel ethical decision-making framework considering conflicts in this paper,which adopts our proposed incremental ILP system.The framework consists of two processes:the learning process and the deduction process.The first process records bottom clauses with their score functions and learns rules guided by the entailment and the score function.The second process obtains an ethical decision based on the rules.In an ethical scenario about chatbots for teenagers’mental health,we verify that our framework can learn ethical rules and make ethical decisions.Besides,we extract incremental ILP from the framework and compare it with the state-of-the-art ILP systems based on ASP(Answer Set Programming)focusing on conflict resolution.The results of comparisons show that our proposed system can generate better-quality rules than most other systems.展开更多
The visions of Industry 4.0 and 5.0 have reinforced the industrial environment.They have also made artificial intelligence incorporated as a major facilitator.Diagnosing machine faults has become a solid foundation fo...The visions of Industry 4.0 and 5.0 have reinforced the industrial environment.They have also made artificial intelligence incorporated as a major facilitator.Diagnosing machine faults has become a solid foundation for automatically recognizing machine failure,and thus timely maintenance can ensure safe operations.Transfer learning is a promising solution that can enhance the machine fault diagnosis model by borrowing pre-trained knowledge from the source model and applying it to the target model,which typically involves two datasets.In response to the availability of multiple datasets,this paper proposes using selective and adaptive incremental transfer learning(SA-ITL),which fuses three algorithms,namely,the hybrid selective algorithm,the transferability enhancement algorithm,and the incremental transfer learning algorithm.It is a selective algorithm that enables selecting and ordering appropriate datasets for transfer learning and selecting useful knowledge to avoid negative transfer.The algorithm also adaptively adjusts the portion of training data to balance the learning rate and training time.The proposed algorithm is evaluated and analyzed using ten benchmark datasets.Compared with other algorithms from existing works,SA-ITL improves the accuracy of all datasets.Ablation studies present the accuracy enhancements of the SA-ITL,including the hybrid selective algorithm(1.22%-3.82%),transferability enhancement algorithm(1.91%-4.15%),and incremental transfer learning algorithm(0.605%-2.68%).These also show the benefits of enhancing the target model with heterogeneous image datasets that widen the range of domain selection between source and target domains.展开更多
We investigated the parametric optimization on incremental sheet forming of stainless steel using Grey Relational Analysis(GRA) coupled with Principal Component Analysis(PCA). AISI 316L stainless steel sheets were use...We investigated the parametric optimization on incremental sheet forming of stainless steel using Grey Relational Analysis(GRA) coupled with Principal Component Analysis(PCA). AISI 316L stainless steel sheets were used to develop double wall angle pyramid with aid of tungsten carbide tool. GRA coupled with PCA was used to plan the experiment conditions. Control factors such as Tool Diameter(TD), Step Depth(SD), Bottom Wall Angle(BWA), Feed Rate(FR) and Spindle Speed(SS) on Top Wall Angle(TWA) and Top Wall Angle Surface Roughness(TWASR) have been studied. Wall angle increases with increasing tool diameter due to large contact area between tool and workpiece. As the step depth, feed rate and spindle speed increase,TWASR decreases with increasing tool diameter. As the step depth increasing, the hydrostatic stress is raised causing severe cracks in the deformed surface. Hence it was concluded that the proposed hybrid method was suitable for optimizing the factors and response.展开更多
Hyperspectral images typically have high spectral resolution but low spatial resolution,which impacts the reliability and accuracy of subsequent applications,for example,remote sensingclassification and mineral identi...Hyperspectral images typically have high spectral resolution but low spatial resolution,which impacts the reliability and accuracy of subsequent applications,for example,remote sensingclassification and mineral identification.But in traditional methods via deep convolution neural net-works,indiscriminately extracting and fusing spectral and spatial features makes it challenging toutilize the differentiated information across adjacent spectral channels.Thus,we proposed a multi-branch interleaved iterative upsampling hyperspectral image super-resolution reconstruction net-work(MIIUSR)to address the above problems.We reinforce spatial feature extraction by integrat-ing detailed features from different receptive fields across adjacent channels.Furthermore,we pro-pose an interleaved iterative upsampling process during the reconstruction stage,which progres-sively fuses incremental information among adjacent frequency bands.Additionally,we add twoparallel three dimensional(3D)feature extraction branches to the backbone network to extractspectral and spatial features of varying granularity.We further enhance the backbone network’sconstruction results by leveraging the difference between two dimensional(2D)channel-groupingspatial features and 3D multi-granularity features.The results obtained by applying the proposednetwork model to the CAVE test set show that,at a scaling factor of×4,the peak signal to noiseratio,spectral angle mapping,and structural similarity are 37.310 dB,3.525 and 0.9438,respec-tively.Besides,extensive experiments conducted on the Harvard and Foster datasets demonstratethe superior potential of the proposed model in hyperspectral super-resolution reconstruction.展开更多
To improve the prediction accuracy of chaotic time series and reconstruct a more reasonable phase space structure of the prediction network,we propose a convolutional neural network-long short-term memory(CNN-LSTM)pre...To improve the prediction accuracy of chaotic time series and reconstruct a more reasonable phase space structure of the prediction network,we propose a convolutional neural network-long short-term memory(CNN-LSTM)prediction model based on the incremental attention mechanism.Firstly,a traversal search is conducted through the traversal layer for finite parameters in the phase space.Then,an incremental attention layer is utilized for parameter judgment based on the dimension weight criteria(DWC).The phase space parameters that best meet DWC are selected and fed into the input layer.Finally,the constructed CNN-LSTM network extracts spatio-temporal features and provides the final prediction results.The model is verified using Logistic,Lorenz,and sunspot chaotic time series,and the performance is compared from the two dimensions of prediction accuracy and network phase space structure.Additionally,the CNN-LSTM network based on incremental attention is compared with long short-term memory(LSTM),convolutional neural network(CNN),recurrent neural network(RNN),and support vector regression(SVR)for prediction accuracy.The experiment results indicate that the proposed composite network model possesses enhanced capability in extracting temporal features and achieves higher prediction accuracy.Also,the algorithm to estimate the phase space parameter is compared with the traditional CAO,false nearest neighbor,and C-C,three typical methods for determining the chaotic phase space parameters.The experiments reveal that the phase space parameter estimation algorithm based on the incremental attention mechanism is superior in prediction accuracy compared with the traditional phase space reconstruction method in five networks,including CNN-LSTM,LSTM,CNN,RNN,and SVR.展开更多
Deep Convolution Neural Networks(DCNNs)can capture discriminative features from large datasets.However,how to incrementally learn new samples without forgetting old ones and recognize novel classes that arise in the d...Deep Convolution Neural Networks(DCNNs)can capture discriminative features from large datasets.However,how to incrementally learn new samples without forgetting old ones and recognize novel classes that arise in the dynamically changing world,e.g.,classifying newly discovered fish species,remains an open problem.We address an even more challenging and realistic setting of this problem where new class samples are insufficient,i.e.,Few-Shot Class-Incremental Learning(FSCIL).Current FSCIL methods augment the training data to alleviate the overfitting of novel classes.By contrast,we propose Filter Bank Networks(FBNs)that augment the learnable filters to capture fine-detailed features for adapting to future new classes.In the forward pass,FBNs augment each convolutional filter to a virtual filter bank containing the canonical one,i.e.,itself,and multiple transformed versions.During back-propagation,FBNs explicitly stimulate fine-detailed features to emerge and collectively align all gradients of each filter bank to learn the canonical one.FBNs capture pattern variants that do not yet exist in the pretraining session,thus making it easy to incorporate new classes in the incremental learning phase.Moreover,FBNs introduce model-level prior knowledge to efficiently utilize the limited few-shot data.Extensive experiments on MNIST,CIFAR100,CUB200,andMini-ImageNet datasets show that FBNs consistently outperformthe baseline by a significantmargin,reporting new state-of-the-art FSCIL results.In addition,we contribute a challenging FSCIL benchmark,Fishshot1K,which contains 8261 underwater images covering 1000 ocean fish species.The code is included in the supplementary materials.展开更多
In the traditional incremental analysis update(IAU)process,all analysis increments are treated as constant forcing in a model’s prognostic equations over a certain time window.This approach effectively reduces high-f...In the traditional incremental analysis update(IAU)process,all analysis increments are treated as constant forcing in a model’s prognostic equations over a certain time window.This approach effectively reduces high-frequency oscillations introduced by data assimilation.However,as different scales of increments have unique evolutionary speeds and life histories in a numerical model,the traditional IAU scheme cannot fully meet the requirements of short-term forecasting for the damping of high-frequency noise and may even cause systematic drifts.Therefore,a multi-scale IAU scheme is proposed in this paper.Analysis increments were divided into different scale parts using a spatial filtering technique.For each scale increment,the optimal relaxation time in the IAU scheme was determined by the skill of the forecasting results.Finally,different scales of analysis increments were added to the model integration during their optimal relaxation time.The multi-scale IAU scheme can effectively reduce the noise and further improve the balance between large-scale and small-scale increments in the model initialization stage.To evaluate its performance,several numerical experiments were conducted to simulate the path and intensity of Typhoon Mangkhut(2018)and showed that:(1)the multi-scale IAU scheme had an obvious effect on noise control at the initial stage of data assimilation;(2)the optimal relaxation time for large-scale and small-scale increments was estimated as 6 h and 3 h,respectively;(3)the forecast performance of the multi-scale IAU scheme in the prediction of Typhoon Mangkhut(2018)was better than that of the traditional IAU scheme.The results demonstrate the superiority of the multi-scale IAU scheme.展开更多
Background With the development of information technology,there is a significant increase in the number of network traffic logs mixed with various types of cyberattacks.Traditional intrusion detection systems(IDSs)are...Background With the development of information technology,there is a significant increase in the number of network traffic logs mixed with various types of cyberattacks.Traditional intrusion detection systems(IDSs)are limited in detecting new inconstant patterns and identifying malicious traffic traces in real time.Therefore,there is an urgent need to implement more effective intrusion detection technologies to protect computer security.Methods In this study,we designed a hybrid IDS by combining our incremental learning model(KANSOINN)and active learning to learn new log patterns and detect various network anomalies in real time.Conclusions Experimental results on the NSLKDD dataset showed that KAN-SOINN can be continuously improved and effectively detect malicious logs.Meanwhile,comparative experiments proved that using a hybrid query strategy in active learning can improve the model learning efficiency.展开更多
Attribute reduction,also known as feature selection,for decision information systems is one of the most pivotal issues in machine learning and data mining.Approaches based on the rough set theory and some extensions w...Attribute reduction,also known as feature selection,for decision information systems is one of the most pivotal issues in machine learning and data mining.Approaches based on the rough set theory and some extensions were proved to be efficient for dealing with the problemof attribute reduction.Unfortunately,the intuitionistic fuzzy sets based methods have not received much interest,while these methods are well-known as a very powerful approach to noisy decision tables,i.e.,data tables with the low initial classification accuracy.Therefore,this paper provides a novel incremental attribute reductionmethod to dealmore effectivelywith noisy decision tables,especially for highdimensional ones.In particular,we define a new reduct and then design an original attribute reduction method based on the distance measure between two intuitionistic fuzzy partitions.It should be noted that the intuitionistic fuzzypartitiondistance iswell-knownas aneffectivemeasure todetermine important attributes.More interestingly,an incremental formula is also developed to quickly compute the intuitionistic fuzzy partition distance in case when the decision table increases in the number of objects.This formula is then applied to construct an incremental attribute reduction algorithm for handling such dynamic tables.Besides,some experiments are conducted on real datasets to show that our method is far superior to the fuzzy rough set based methods in terms of the size of reduct and the classification accuracy.展开更多
Recently, deep convolutional neural networks (DCNNs) have achieved remarkable results in image classification tasks. Despite convolutional networks’ great successes, their training process relies on a large amount of...Recently, deep convolutional neural networks (DCNNs) have achieved remarkable results in image classification tasks. Despite convolutional networks’ great successes, their training process relies on a large amount of data prepared in advance, which is often challenging in real-world applications, such as streaming data and concept drift. For this reason, incremental learning (continual learning) has attracted increasing attention from scholars. However, incremental learning is associated with the challenge of catastrophic forgetting: the performance on previous tasks drastically degrades after learning a new task. In this paper, we propose a new strategy to alleviate catastrophic forgetting when neural networks are trained in continual domains. Specifically, two components are applied: data translation based on transfer learning and knowledge distillation. The former translates a portion of new data to reconstruct the partial data distribution of the old domain. The latter uses an old model as a teacher to guide a new model. The experimental results on three datasets have shown that our work can effectively alleviate catastrophic forgetting by a combination of the two methods aforementioned.展开更多
In the incremental sheet forming (ISF) process, springback is a very important factor that affects the quality of parts. Predicting and controlling springback accurately is essential for the design of the toolpath f...In the incremental sheet forming (ISF) process, springback is a very important factor that affects the quality of parts. Predicting and controlling springback accurately is essential for the design of the toolpath for ISF. A three-dimensional elasto-plastic finite element model (FEM) was developed to simulate the process and the simulated results were compared with those from the experiment. The springback angle was found to be in accordance with the experimental result, proving the FEM to be effective. A coupled artificial neural networks (ANN) and finite element method technique was developed to simulate and predict springback responses to changes in the processing parameters. A particle swarm optimization (PSO) algorithm was used to optimize the weights and thresholds of the neural network model. The neural network was trained using available FEM simulation data. The results showed that a more accurate prediction of s!oringback can be acquired using the FEM-PSONN model.展开更多
[ Objective] The aim of the research was to provide reference for reasonable application of nitrogen fertilizer for high yield.cultivation of hybrid rape cuhivar Youyan 9 and Youyan 10. [ Method] The net increment cha...[ Objective] The aim of the research was to provide reference for reasonable application of nitrogen fertilizer for high yield.cultivation of hybrid rape cuhivar Youyan 9 and Youyan 10. [ Method] The net increment changes of individual plant fresh weight and dry matter weight of Youyan 9 and Youyan 10 with different nitrogen application treatments were studied. [ Result] The differences among average fresh weight increments of individual plant and average dry matter weight increment of individual plant with different treatments reached 0. 01 extremely significant level. Fresh weight increment and dry matter weight net increment of individual plant declined gradually with the increase of nitrogen application. In growtheourse ,fresh weight net increment of individual plant increased firstly then decreased and the maximum was in beginning flowering stage, besides that dry matter net increment increased gradually and the maximum was in mature period. The correlations among fresh net increment, dry matter weight net increment and yield net increment were positive or extremely positive. [ Conclusion] Under experimental condition, when nitrogen application was 225 kg/hm^2, hybrid rape Yanyou 9 and Yanyou 10 with low erucic,low glucosinolate could obtain high yield.展开更多
We present a novel incremental algorithm for non-slicing floorplans based on the corner block list representation. The horizontal and vertical adjacency graphs are derived from the packing of the initial floorplanning...We present a novel incremental algorithm for non-slicing floorplans based on the corner block list representation. The horizontal and vertical adjacency graphs are derived from the packing of the initial floorplanning results. Based on the critical path and the accumulated slack distances we define,we choose the best position for insertion and do a series of operations incrementally, such as deleting modules, adding modules, and resizing modules quickly. This incremental floorplanning algorithm has a very high speed less than 1μm,which is one of the most important measures in this research. The algorithm preserves the original good performances on area and wire length. It can also supply other tools with good physical estimates for area, wire length, and other performance guidelines.展开更多
Reduced Q-matrix (Qr matrix) plays an important role in the rule space model (RSM) and the attribute hierarchy method (AHM). Based on the attribute hierarchy, a valid/invalid item is defined. The judgment method...Reduced Q-matrix (Qr matrix) plays an important role in the rule space model (RSM) and the attribute hierarchy method (AHM). Based on the attribute hierarchy, a valid/invalid item is defined. The judgment method of the valid/invalid item is developed on the relation between reachability matrix and valid items. And valid items are explained from the perspective of graph theory. An incremental augment algorithm for constructing Qr matrix is proposed based on the idea of incremental forward regression, and its validity is theoretically considered. Results of empirical tests are given in order to compare the performance of the incremental augment algo-rithm and the Tatsuoka algorithm upon the running time. Empirical evidence shows that the algorithm outper-forms the Tatsuoka algorithm, and the analysis of the two algorithms also show linear growth with respect to the number of valid items. Mathematical models with 10 attributes are built for the two algorithms by the linear regression analysis.展开更多
A method utilizing variable depth increments during incremental forming was proposed and then optimized based on numerical simulation and intelligent algorithm.Initially,a finite element method(FEM) model was set up a...A method utilizing variable depth increments during incremental forming was proposed and then optimized based on numerical simulation and intelligent algorithm.Initially,a finite element method(FEM) model was set up and then experimentally verified.And the relation between depth increment and the minimum thickness tmin as well as its location was analyzed through the FEM model.Afterwards,the variation of depth increments was defined.The designed part was divided into three areas according to the main deformation mechanism,with Di(i=1,2) representing the two dividing locations.And three different values of depth increment,Δzi(i=1,2,3) were utilized for the three areas,respectively.Additionally,an orthogonal test was established to research the relation between the five process parameters(D and Δz) and tmin as well as its location.The result shows that Δz2 has the most significant influence on the thickness distribution for the corresponding area is the largest one.Finally,a single evaluating indicator,taking into account of both tmin and its location,was formatted with a linear weighted model.And the process parameters were optimized through a genetic algorithm integrated with an artificial neural network based on the evaluating index.The result shows that the proposed algorithm is satisfactory for the optimization of variable depth increment.展开更多
The relationship between eco-hydrographic benefit of forest vegetation and climatic environmental factors is one of the focuses in the research on environmental protection and ecosystem countermeasures in Wetland. Th...The relationship between eco-hydrographic benefit of forest vegetation and climatic environmental factors is one of the focuses in the research on environmental protection and ecosystem countermeasures in Wetland. The runoff, sediment and soil moisture rate dynamics in Robinia pseudoacacia plantation and its clearcut area were investigated in the natural runoff experiment plots in Yellow River Delta Wet- land, Shandong Province, China. The correlation of height increment ofR. pseudoacacia with nine climate factors such as light, water, heat, etc. was analyzed by stepwise regression analysis. The results showed that the amounts of runoff and sediment in clearcut area of R. pseudoacacia were 53.9%-150.8% and 172.8%-387.1% higher than that in Robinia pseudoacacia plantation, respectively. The runoff peak value in R. pseudoacacia stand was obviously lower than that in clerarcut area, meantime, the occurrence of runoffpeak in R. pseudoacacia stand was 25 min later than in its clerarcut area. The soil moisture rates in R. pseudoacacia stand and its clearcut varied periodically with annual rainfall precipitation in both dry season and humid season. The annual mean soil moisture rate in R. pseudoacacia stand was 23.3%-25.6% higher than that in its clearcut area. Meanwhile, a regression model reflecting the correlation between the height increment of R. pseudoacacia and climatic factors was developed by stepwise regression procedure method. It showed that the light was the most important factor for the height increment ofR. pseudoacacia, followed by water and heat factors.展开更多
基金financed by the National Science Centre,Poland,under project No.2019/35/B/NZ8/01381 entitled"Impact of invasive tree species on ecosystem services:plant biodiversity,carbon and nitrogen cycling and climate regulation"by the Institute of Dendrology,Polish Academy of Sciences。
文摘Prunus serotina and Robinia pseudoacacia are the most widespread invasive trees in Central Europe.In addition,according to climate models,decreased growth of many economically and ecologically important native trees will likely be observed in the future.We aimed to assess the impact of these two neophytes,which differ in the biomass range and nitrogen-fixing abilities observed in Central European conditions,on the relative aboveground biomass increments of native oaks Qucrcus robur and Q.petraea and Scots pine Pinus sylvestris.We aimed to increase our understanding of the relationship between facilitation and competition between woody alien species and overstory native trees.We established 72 circular plots(0.05 ha)in two different forest habitat types and stands varying in age in western Poland.We chose plots with different abundances of the studied neophytes to determine how effects scaled along the quantitative invasion gradient.Furthermore,we collected growth cores of the studied native species,and we calculated aboveground biomass increments at the tree and stand levels.Then,we used generalized linear mixed-effects models to assess the impact of invasive species abundances on relative aboveground biomass increments of native tree species.We did not find a biologically or statistically significant impact of invasive R.pseudoacacia or P.serotina on the relative aboveground,biomass increments of native oaks and pines along the quantitative gradient of invader biomass or on the proportion of total stand biomass accounted for by invaders.The neophytes did not act as native tree growth stimulators but also did not compete with them for resources,which would escalate the negative impact of climate change on pines and oaks.The neophytes should not significantly modify the carbon sequestration capacity of the native species.Our work combines elements of the per capita effect of invasion with research on mixed forest management.
基金supported by the Innovation Scientists and Technicians Troop Construction Projects of Henan Province(224000510002)。
文摘Time-Sensitive Network(TSN)with deterministic transmission capability is increasingly used in many emerging fields.It mainly guarantees the Quality of Service(QoS)of applications with strict requirements on time and security.One of the core features of TSN is traffic scheduling with bounded low delay in the network.However,traffic scheduling schemes in TSN are usually synthesized offline and lack dynamism.To implement incremental scheduling of newly arrived traffic in TSN,we propose a Dynamic Response Incremental Scheduling(DR-IS)method for time-sensitive traffic and deploy it on a software-defined time-sensitive network architecture.Under the premise of meeting the traffic scheduling requirements,we adopt two modes,traffic shift and traffic exchange,to dynamically adjust the time slot injection position of the traffic in the original scheme,and determine the sending offset time of the new timesensitive traffic to minimize the global traffic transmission jitter.The evaluation results show that DRIS method can effectively control the large increase of traffic transmission jitter in incremental scheduling without affecting the transmission delay,thus realizing the dynamic incremental scheduling of time-sensitive traffic in TSN.
基金supported by Fundamental Research Program of Shanxi Province(No.20210302123444)the Research Project at the College Level of China Institute of Labor Relations(No.23XYJS018)+2 种基金the ICH Digitalization and Multi-Source Information Fusion Fujian Provincial University Engineering Research Center 2022 Open Fund Project(G3-KF2207)the China University Industry University Research Innovation Fund(No.2021FNA02009)the Key R&D Program(International Science and Technology Cooperation Project)of Shanxi Province China(No.201903D421003).
文摘Currently,distributed routing protocols are constrained by offering a single path between any pair of nodes,thereby limiting the potential throughput and overall network performance.This approach not only restricts the flow of data but also makes the network susceptible to failures in case the primary path is disrupted.In contrast,routing protocols that leverage multiple paths within the network offer a more resilient and efficient solution.Multipath routing,as a fundamental concept,surpasses the limitations of traditional shortest path first protocols.It not only redirects traffic to unused resources,effectively mitigating network congestion,but also ensures load balancing across the network.This optimization significantly improves network utilization and boosts the overall performance,making it a widely recognized efficient method for enhancing network reliability.To further strengthen network resilience against failures,we introduce a routing scheme known as Multiple Nodes with at least Two Choices(MNTC).This innovative approach aims to significantly enhance network availability by providing each node with at least two routing choices.By doing so,it not only reduces the dependency on a single path but also creates redundant paths that can be utilized in case of failures,thereby enhancing the overall resilience of the network.To ensure the optimal placement of nodes,we propose three incremental deployment algorithms.These algorithms carefully select the most suitable set of nodes for deployment,taking into account various factors such as node connectivity,traffic patterns,and network topology.By deployingMNTCon a carefully chosen set of nodes,we can significantly enhance network reliability without the need for a complete overhaul of the existing infrastructure.We have conducted extensive evaluations of MNTC in diverse topological spaces,demonstrating its effectiveness in maintaining high network availability with minimal path stretch.The results are impressive,showing that even when implemented on just 60%of nodes,our incremental deployment method significantly boosts network availability.This underscores the potential of MNTC in enhancing network resilience and performance,making it a viable solution for modern networks facing increasing demands and complexities.The algorithms OSPF,TBFH,DC and LFC perform fast rerouting based on strict conditions,while MNTC is not restricted by these conditions.In five real network topologies,the average network availability ofMNTCis improved by 14.68%,6.28%,4.76%and 2.84%,respectively,compared with OSPF,TBFH,DC and LFC.
基金The National Forestry Commission of Mexico and The Mexican National Council for Science and Technology(CONAFOR-CONACYT-115900)。
文摘Multispecies forests have received increased scientific attention,driven by the hypothesis that biodiversity improves ecological resilience.However,a greater species diversity presents challenges for forest management and research.Our study aims to develop basal area growth models for tree species cohorts.The analysis is based on a dataset of 423 permanent plots(2,500 m^(2))located in temperate forests in Durango,Mexico.First,we define tree species cohorts based on individual and neighborhood-based variables using a combination of principal component and cluster analyses.Then,we estimate the basal area increment of each cohort through the generalized additive model to describe the effect of tree size,competition,stand density and site quality.The principal component and cluster analyses assign a total of 37 tree species to eight cohorts that differed primarily with regard to the distribution of tree size and vertical position within the community.The generalized additive models provide satisfactory estimates of tree growth for the species cohorts,explaining between 19 and 53 percent of the total variation of basal area increment,and highlight the following results:i)most cohorts show a"rise-and-fall"effect of tree size on tree growth;ii)surprisingly,the competition index"basal area of larger trees"had showed a positive effect in four of the eight cohorts;iii)stand density had a negative effect on basal area increment,though the effect was minor in medium-and high-density stands,and iv)basal area growth was positively correlated with site quality except for an oak cohort.The developed species cohorts and growth models provide insight into their particular ecological features and growth patterns that may support the development of sustainable management strategies for temperate multispecies forests.
基金This work was funded by the National Natural Science Foundation of China Nos.U22A2099,61966009,62006057the Graduate Innovation Program No.YCSW2022286.
文摘Humans are experiencing the inclusion of artificial agents in their lives,such as unmanned vehicles,service robots,voice assistants,and intelligent medical care.If the artificial agents cannot align with social values or make ethical decisions,they may not meet the expectations of humans.Traditionally,an ethical decision-making framework is constructed by rule-based or statistical approaches.In this paper,we propose an ethical decision-making framework based on incremental ILP(Inductive Logic Programming),which can overcome the brittleness of rule-based approaches and little interpretability of statistical approaches.As the current incremental ILP makes it difficult to solve conflicts,we propose a novel ethical decision-making framework considering conflicts in this paper,which adopts our proposed incremental ILP system.The framework consists of two processes:the learning process and the deduction process.The first process records bottom clauses with their score functions and learns rules guided by the entailment and the score function.The second process obtains an ethical decision based on the rules.In an ethical scenario about chatbots for teenagers’mental health,we verify that our framework can learn ethical rules and make ethical decisions.Besides,we extract incremental ILP from the framework and compare it with the state-of-the-art ILP systems based on ASP(Answer Set Programming)focusing on conflict resolution.The results of comparisons show that our proposed system can generate better-quality rules than most other systems.
文摘The visions of Industry 4.0 and 5.0 have reinforced the industrial environment.They have also made artificial intelligence incorporated as a major facilitator.Diagnosing machine faults has become a solid foundation for automatically recognizing machine failure,and thus timely maintenance can ensure safe operations.Transfer learning is a promising solution that can enhance the machine fault diagnosis model by borrowing pre-trained knowledge from the source model and applying it to the target model,which typically involves two datasets.In response to the availability of multiple datasets,this paper proposes using selective and adaptive incremental transfer learning(SA-ITL),which fuses three algorithms,namely,the hybrid selective algorithm,the transferability enhancement algorithm,and the incremental transfer learning algorithm.It is a selective algorithm that enables selecting and ordering appropriate datasets for transfer learning and selecting useful knowledge to avoid negative transfer.The algorithm also adaptively adjusts the portion of training data to balance the learning rate and training time.The proposed algorithm is evaluated and analyzed using ten benchmark datasets.Compared with other algorithms from existing works,SA-ITL improves the accuracy of all datasets.Ablation studies present the accuracy enhancements of the SA-ITL,including the hybrid selective algorithm(1.22%-3.82%),transferability enhancement algorithm(1.91%-4.15%),and incremental transfer learning algorithm(0.605%-2.68%).These also show the benefits of enhancing the target model with heterogeneous image datasets that widen the range of domain selection between source and target domains.
文摘We investigated the parametric optimization on incremental sheet forming of stainless steel using Grey Relational Analysis(GRA) coupled with Principal Component Analysis(PCA). AISI 316L stainless steel sheets were used to develop double wall angle pyramid with aid of tungsten carbide tool. GRA coupled with PCA was used to plan the experiment conditions. Control factors such as Tool Diameter(TD), Step Depth(SD), Bottom Wall Angle(BWA), Feed Rate(FR) and Spindle Speed(SS) on Top Wall Angle(TWA) and Top Wall Angle Surface Roughness(TWASR) have been studied. Wall angle increases with increasing tool diameter due to large contact area between tool and workpiece. As the step depth, feed rate and spindle speed increase,TWASR decreases with increasing tool diameter. As the step depth increasing, the hydrostatic stress is raised causing severe cracks in the deformed surface. Hence it was concluded that the proposed hybrid method was suitable for optimizing the factors and response.
基金the National Natural Science Foun-dation of China(Nos.61471263,61872267 and U21B2024)the Natural Science Foundation of Tianjin,China(No.16JCZDJC31100)Tianjin University Innovation Foundation(No.2021XZC0024).
文摘Hyperspectral images typically have high spectral resolution but low spatial resolution,which impacts the reliability and accuracy of subsequent applications,for example,remote sensingclassification and mineral identification.But in traditional methods via deep convolution neural net-works,indiscriminately extracting and fusing spectral and spatial features makes it challenging toutilize the differentiated information across adjacent spectral channels.Thus,we proposed a multi-branch interleaved iterative upsampling hyperspectral image super-resolution reconstruction net-work(MIIUSR)to address the above problems.We reinforce spatial feature extraction by integrat-ing detailed features from different receptive fields across adjacent channels.Furthermore,we pro-pose an interleaved iterative upsampling process during the reconstruction stage,which progres-sively fuses incremental information among adjacent frequency bands.Additionally,we add twoparallel three dimensional(3D)feature extraction branches to the backbone network to extractspectral and spatial features of varying granularity.We further enhance the backbone network’sconstruction results by leveraging the difference between two dimensional(2D)channel-groupingspatial features and 3D multi-granularity features.The results obtained by applying the proposednetwork model to the CAVE test set show that,at a scaling factor of×4,the peak signal to noiseratio,spectral angle mapping,and structural similarity are 37.310 dB,3.525 and 0.9438,respec-tively.Besides,extensive experiments conducted on the Harvard and Foster datasets demonstratethe superior potential of the proposed model in hyperspectral super-resolution reconstruction.
文摘To improve the prediction accuracy of chaotic time series and reconstruct a more reasonable phase space structure of the prediction network,we propose a convolutional neural network-long short-term memory(CNN-LSTM)prediction model based on the incremental attention mechanism.Firstly,a traversal search is conducted through the traversal layer for finite parameters in the phase space.Then,an incremental attention layer is utilized for parameter judgment based on the dimension weight criteria(DWC).The phase space parameters that best meet DWC are selected and fed into the input layer.Finally,the constructed CNN-LSTM network extracts spatio-temporal features and provides the final prediction results.The model is verified using Logistic,Lorenz,and sunspot chaotic time series,and the performance is compared from the two dimensions of prediction accuracy and network phase space structure.Additionally,the CNN-LSTM network based on incremental attention is compared with long short-term memory(LSTM),convolutional neural network(CNN),recurrent neural network(RNN),and support vector regression(SVR)for prediction accuracy.The experiment results indicate that the proposed composite network model possesses enhanced capability in extracting temporal features and achieves higher prediction accuracy.Also,the algorithm to estimate the phase space parameter is compared with the traditional CAO,false nearest neighbor,and C-C,three typical methods for determining the chaotic phase space parameters.The experiments reveal that the phase space parameter estimation algorithm based on the incremental attention mechanism is superior in prediction accuracy compared with the traditional phase space reconstruction method in five networks,including CNN-LSTM,LSTM,CNN,RNN,and SVR.
基金support from the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No.XDA27000000.
文摘Deep Convolution Neural Networks(DCNNs)can capture discriminative features from large datasets.However,how to incrementally learn new samples without forgetting old ones and recognize novel classes that arise in the dynamically changing world,e.g.,classifying newly discovered fish species,remains an open problem.We address an even more challenging and realistic setting of this problem where new class samples are insufficient,i.e.,Few-Shot Class-Incremental Learning(FSCIL).Current FSCIL methods augment the training data to alleviate the overfitting of novel classes.By contrast,we propose Filter Bank Networks(FBNs)that augment the learnable filters to capture fine-detailed features for adapting to future new classes.In the forward pass,FBNs augment each convolutional filter to a virtual filter bank containing the canonical one,i.e.,itself,and multiple transformed versions.During back-propagation,FBNs explicitly stimulate fine-detailed features to emerge and collectively align all gradients of each filter bank to learn the canonical one.FBNs capture pattern variants that do not yet exist in the pretraining session,thus making it easy to incorporate new classes in the incremental learning phase.Moreover,FBNs introduce model-level prior knowledge to efficiently utilize the limited few-shot data.Extensive experiments on MNIST,CIFAR100,CUB200,andMini-ImageNet datasets show that FBNs consistently outperformthe baseline by a significantmargin,reporting new state-of-the-art FSCIL results.In addition,we contribute a challenging FSCIL benchmark,Fishshot1K,which contains 8261 underwater images covering 1000 ocean fish species.The code is included in the supplementary materials.
基金jointly sponsored by the Shenzhen Science and Technology Innovation Commission (Grant No. KCXFZ20201221173610028)the key program of the National Natural Science Foundation of China (Grant No. 42130605)
文摘In the traditional incremental analysis update(IAU)process,all analysis increments are treated as constant forcing in a model’s prognostic equations over a certain time window.This approach effectively reduces high-frequency oscillations introduced by data assimilation.However,as different scales of increments have unique evolutionary speeds and life histories in a numerical model,the traditional IAU scheme cannot fully meet the requirements of short-term forecasting for the damping of high-frequency noise and may even cause systematic drifts.Therefore,a multi-scale IAU scheme is proposed in this paper.Analysis increments were divided into different scale parts using a spatial filtering technique.For each scale increment,the optimal relaxation time in the IAU scheme was determined by the skill of the forecasting results.Finally,different scales of analysis increments were added to the model integration during their optimal relaxation time.The multi-scale IAU scheme can effectively reduce the noise and further improve the balance between large-scale and small-scale increments in the model initialization stage.To evaluate its performance,several numerical experiments were conducted to simulate the path and intensity of Typhoon Mangkhut(2018)and showed that:(1)the multi-scale IAU scheme had an obvious effect on noise control at the initial stage of data assimilation;(2)the optimal relaxation time for large-scale and small-scale increments was estimated as 6 h and 3 h,respectively;(3)the forecast performance of the multi-scale IAU scheme in the prediction of Typhoon Mangkhut(2018)was better than that of the traditional IAU scheme.The results demonstrate the superiority of the multi-scale IAU scheme.
基金Supported by SJTU-HUAWEI TECH Cybersecurity Innovation Lab。
文摘Background With the development of information technology,there is a significant increase in the number of network traffic logs mixed with various types of cyberattacks.Traditional intrusion detection systems(IDSs)are limited in detecting new inconstant patterns and identifying malicious traffic traces in real time.Therefore,there is an urgent need to implement more effective intrusion detection technologies to protect computer security.Methods In this study,we designed a hybrid IDS by combining our incremental learning model(KANSOINN)and active learning to learn new log patterns and detect various network anomalies in real time.Conclusions Experimental results on the NSLKDD dataset showed that KAN-SOINN can be continuously improved and effectively detect malicious logs.Meanwhile,comparative experiments proved that using a hybrid query strategy in active learning can improve the model learning efficiency.
基金funded by Hanoi University of Industry under Grant Number 27-2022-RD/HD-DHCN (URL:https://www.haui.edu.vn/).
文摘Attribute reduction,also known as feature selection,for decision information systems is one of the most pivotal issues in machine learning and data mining.Approaches based on the rough set theory and some extensions were proved to be efficient for dealing with the problemof attribute reduction.Unfortunately,the intuitionistic fuzzy sets based methods have not received much interest,while these methods are well-known as a very powerful approach to noisy decision tables,i.e.,data tables with the low initial classification accuracy.Therefore,this paper provides a novel incremental attribute reductionmethod to dealmore effectivelywith noisy decision tables,especially for highdimensional ones.In particular,we define a new reduct and then design an original attribute reduction method based on the distance measure between two intuitionistic fuzzy partitions.It should be noted that the intuitionistic fuzzypartitiondistance iswell-knownas aneffectivemeasure todetermine important attributes.More interestingly,an incremental formula is also developed to quickly compute the intuitionistic fuzzy partition distance in case when the decision table increases in the number of objects.This formula is then applied to construct an incremental attribute reduction algorithm for handling such dynamic tables.Besides,some experiments are conducted on real datasets to show that our method is far superior to the fuzzy rough set based methods in terms of the size of reduct and the classification accuracy.
文摘Recently, deep convolutional neural networks (DCNNs) have achieved remarkable results in image classification tasks. Despite convolutional networks’ great successes, their training process relies on a large amount of data prepared in advance, which is often challenging in real-world applications, such as streaming data and concept drift. For this reason, incremental learning (continual learning) has attracted increasing attention from scholars. However, incremental learning is associated with the challenge of catastrophic forgetting: the performance on previous tasks drastically degrades after learning a new task. In this paper, we propose a new strategy to alleviate catastrophic forgetting when neural networks are trained in continual domains. Specifically, two components are applied: data translation based on transfer learning and knowledge distillation. The former translates a portion of new data to reconstruct the partial data distribution of the old domain. The latter uses an old model as a teacher to guide a new model. The experimental results on three datasets have shown that our work can effectively alleviate catastrophic forgetting by a combination of the two methods aforementioned.
基金Project(50175034) supported by the National Natural Science Foundation of China
文摘In the incremental sheet forming (ISF) process, springback is a very important factor that affects the quality of parts. Predicting and controlling springback accurately is essential for the design of the toolpath for ISF. A three-dimensional elasto-plastic finite element model (FEM) was developed to simulate the process and the simulated results were compared with those from the experiment. The springback angle was found to be in accordance with the experimental result, proving the FEM to be effective. A coupled artificial neural networks (ANN) and finite element method technique was developed to simulate and predict springback responses to changes in the processing parameters. A particle swarm optimization (PSO) algorithm was used to optimize the weights and thresholds of the neural network model. The neural network was trained using available FEM simulation data. The results showed that a more accurate prediction of s!oringback can be acquired using the FEM-PSONN model.
基金Funds for Transformation of Scientific and Technological Achievements of Ministry of Science and Technology of China (04EFN215200268)the Nomarch Special Foundation for the Excellent Science and Technology Talents of Guizhou Province[(2005(77)]the Science and Technology Program of Guizhou Province[(2006)6001]~~
文摘[ Objective] The aim of the research was to provide reference for reasonable application of nitrogen fertilizer for high yield.cultivation of hybrid rape cuhivar Youyan 9 and Youyan 10. [ Method] The net increment changes of individual plant fresh weight and dry matter weight of Youyan 9 and Youyan 10 with different nitrogen application treatments were studied. [ Result] The differences among average fresh weight increments of individual plant and average dry matter weight increment of individual plant with different treatments reached 0. 01 extremely significant level. Fresh weight increment and dry matter weight net increment of individual plant declined gradually with the increase of nitrogen application. In growtheourse ,fresh weight net increment of individual plant increased firstly then decreased and the maximum was in beginning flowering stage, besides that dry matter net increment increased gradually and the maximum was in mature period. The correlations among fresh net increment, dry matter weight net increment and yield net increment were positive or extremely positive. [ Conclusion] Under experimental condition, when nitrogen application was 225 kg/hm^2, hybrid rape Yanyou 9 and Yanyou 10 with low erucic,low glucosinolate could obtain high yield.
文摘We present a novel incremental algorithm for non-slicing floorplans based on the corner block list representation. The horizontal and vertical adjacency graphs are derived from the packing of the initial floorplanning results. Based on the critical path and the accumulated slack distances we define,we choose the best position for insertion and do a series of operations incrementally, such as deleting modules, adding modules, and resizing modules quickly. This incremental floorplanning algorithm has a very high speed less than 1μm,which is one of the most important measures in this research. The algorithm preserves the original good performances on area and wire length. It can also supply other tools with good physical estimates for area, wire length, and other performance guidelines.
基金Supported by the National Natural Science Foundation of China (30860084,60673014,60263005)the Backbone Young Teachers Foundation of Fujian Normal University(2008100244)the Department of Education Foundation of Fujian Province (ZA09047)~~
文摘Reduced Q-matrix (Qr matrix) plays an important role in the rule space model (RSM) and the attribute hierarchy method (AHM). Based on the attribute hierarchy, a valid/invalid item is defined. The judgment method of the valid/invalid item is developed on the relation between reachability matrix and valid items. And valid items are explained from the perspective of graph theory. An incremental augment algorithm for constructing Qr matrix is proposed based on the idea of incremental forward regression, and its validity is theoretically considered. Results of empirical tests are given in order to compare the performance of the incremental augment algo-rithm and the Tatsuoka algorithm upon the running time. Empirical evidence shows that the algorithm outper-forms the Tatsuoka algorithm, and the analysis of the two algorithms also show linear growth with respect to the number of valid items. Mathematical models with 10 attributes are built for the two algorithms by the linear regression analysis.
文摘A method utilizing variable depth increments during incremental forming was proposed and then optimized based on numerical simulation and intelligent algorithm.Initially,a finite element method(FEM) model was set up and then experimentally verified.And the relation between depth increment and the minimum thickness tmin as well as its location was analyzed through the FEM model.Afterwards,the variation of depth increments was defined.The designed part was divided into three areas according to the main deformation mechanism,with Di(i=1,2) representing the two dividing locations.And three different values of depth increment,Δzi(i=1,2,3) were utilized for the three areas,respectively.Additionally,an orthogonal test was established to research the relation between the five process parameters(D and Δz) and tmin as well as its location.The result shows that Δz2 has the most significant influence on the thickness distribution for the corresponding area is the largest one.Finally,a single evaluating indicator,taking into account of both tmin and its location,was formatted with a linear weighted model.And the process parameters were optimized through a genetic algorithm integrated with an artificial neural network based on the evaluating index.The result shows that the proposed algorithm is satisfactory for the optimization of variable depth increment.
基金the National "11th Five Year" Plan of Science and technology (2006BAD26B06,2006BAD03A1205) Ecological Restore Project of Water Resources Ministry of China (2006-2008)
文摘The relationship between eco-hydrographic benefit of forest vegetation and climatic environmental factors is one of the focuses in the research on environmental protection and ecosystem countermeasures in Wetland. The runoff, sediment and soil moisture rate dynamics in Robinia pseudoacacia plantation and its clearcut area were investigated in the natural runoff experiment plots in Yellow River Delta Wet- land, Shandong Province, China. The correlation of height increment ofR. pseudoacacia with nine climate factors such as light, water, heat, etc. was analyzed by stepwise regression analysis. The results showed that the amounts of runoff and sediment in clearcut area of R. pseudoacacia were 53.9%-150.8% and 172.8%-387.1% higher than that in Robinia pseudoacacia plantation, respectively. The runoff peak value in R. pseudoacacia stand was obviously lower than that in clerarcut area, meantime, the occurrence of runoffpeak in R. pseudoacacia stand was 25 min later than in its clerarcut area. The soil moisture rates in R. pseudoacacia stand and its clearcut varied periodically with annual rainfall precipitation in both dry season and humid season. The annual mean soil moisture rate in R. pseudoacacia stand was 23.3%-25.6% higher than that in its clearcut area. Meanwhile, a regression model reflecting the correlation between the height increment of R. pseudoacacia and climatic factors was developed by stepwise regression procedure method. It showed that the light was the most important factor for the height increment ofR. pseudoacacia, followed by water and heat factors.