In order to determine an appropriate sampling strategy for the effective conservation of wild soybean (Glycine soja Sieb. et Zucc.) in China, a natural population from Jiangwan Airport in Shanghai was studied for its genetic diversity through inter-simple sequence repeat (ISSR) marker analysis of a sample set consisting of 100 randomly collected individuals. A relatively large genetic diversity was detected among the samples based on estimation of DNA products amplified from 15 selected ISSR primers, with the similarity coefficient varying from 0.17 to 0.89. The mean expected heterozygosity (He) was 0.1714 per locus, and the Shannon index (I) was 0.2714. Principal Coordinate Analysis (PCA) further indicated that the genetic diversity of the Jiangwan wild soybean population was not evenly distributed but instead showed a mosaic or clustered distribution pattern. A correlation study between genetic diversity and sample number demonstrated that genetic diversity increased dramatically as the number of samples rose toward 40 individuals, but the increase slowed and rapidly reached a plateau when more than 40 individuals were included in the analysis. It is concluded that (i) a sample set of approximately 35-45 individuals should be included to capture as much genetic diversity as possible when ex situ conservation of a wild soybean population is undertaken; and (ii) collection of wild soybean samples should be spread as widely as possible within a population, with a certain distance kept between sampled individuals.
Regional modeling of landslide hazards is an essential tool for the assessment and management of risk in mountain environments. Previous studies that have focused on modeling earthquake-triggered landslides report high prediction accuracies. However, it is common to use a validation strategy with an equal number of landslide and non-landslide samples, scattered homogeneously across the study area. Consequently, there are overestimations in the epicenter area, and the spatial pattern of modeled locations does not agree well with real events. In order to improve landslide hazard mapping, we proposed a spatially heterogeneous non-landslide sampling strategy by considering local ratios of landslide to non-landslide area. Coseismic landslides triggered by the 2008 Wenchuan Earthquake on the eastern Tibetan Plateau were used as an example. To assess the performance of the new strategy, we trained two random forest models that shared the same hyperparameters. The first was trained using samples from the new heterogeneous strategy, and the second used the traditional approach. In each case the spatial match between modeled and measured (interpreted) landslides was examined by scatterplot, with a 2 km-by-2 km fishnet. Although the traditional approach achieved higher AUC_ROC (0.95) accuracy than the proposed one (0.85), the coefficient of determination (R^2) for the new strategy (0.88) was much higher than for the traditional strategy (0.55). Our results indicate that the proposed strategy outperforms the traditional one when comparing against landslide inventory data. Our work demonstrates that higher prediction accuracies in landslide hazard modeling may be deceptive, and validation of the modeled spatial pattern should be prioritized. The proposed method may also be used to improve the mapping of precipitation-induced landslides. Application of the proposed strategy could benefit precise assessment of landslide risks in mountain environments.
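As a rough illustration of the heterogeneous non-landslide sampling idea described above, the sketch below draws negative (non-landslide) samples block by block, with each block's quota proportional to its local landslide share. This is a minimal numpy sketch under assumed simplifications (a toy boolean landslide raster, a fixed block size, and a simple proportional-quota rule); the paper's exact ratio definition and fishnet size may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy boolean landslide raster (True = landslide pixel), e.g. 200 x 200 cells
landslide = rng.random((200, 200)) < 0.02

def heterogeneous_negatives(landslide, block=20, n_total=400):
    """Sample non-landslide pixels so that each block contributes negatives in
    proportion to its local landslide share (illustrative rule, not the paper's exact one)."""
    h, w = landslide.shape
    blocks, ratios = [], []
    for i in range(0, h, block):
        for j in range(0, w, block):
            win = landslide[i:i + block, j:j + block]
            blocks.append((i, j))
            ratios.append(win.mean())
    ratios = np.asarray(ratios)
    weights = ratios / ratios.sum() if ratios.sum() > 0 else np.full(len(ratios), 1 / len(ratios))
    samples = []
    for (i, j), w_k in zip(blocks, weights):
        n_k = int(round(w_k * n_total))
        win = landslide[i:i + block, j:j + block]
        rows, cols = np.where(~win)          # candidate non-landslide pixels in this block
        if n_k == 0 or rows.size == 0:
            continue
        pick = rng.choice(rows.size, size=min(n_k, rows.size), replace=False)
        samples.extend((i + r, j + c) for r, c in zip(rows[pick], cols[pick]))
    return samples

negatives = heterogeneous_negatives(landslide)
print(len(negatives), "non-landslide samples drawn")
```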
Existing unsupervised domain adaptation approaches primarily focus on reducing the data distribution gap between the source and target domains, often neglecting the influence of class information, leading to inaccurate alignment outcomes. Guided by this observation, this paper proposes an adaptive inter-intra-domain discrepancy method to quantify the intra-class and inter-class discrepancies between the source and target domains. Furthermore, an adaptive factor is introduced to dynamically assess their relative importance. Building upon the proposed adaptive inter-intra-domain discrepancy approach, we develop an inter-intra-domain alignment network with a class-aware sampling strategy (IDAN-CSS) to distill the feature representations. The class-aware sampling strategy, integrated within IDAN-CSS, facilitates more efficient training. Through multiple transfer diagnosis cases, we comprehensively demonstrate the feasibility and effectiveness of the proposed IDAN-CSS model.
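For readers unfamiliar with inter-/intra-class domain discrepancies, the following is a minimal illustrative sketch, not the IDAN-CSS formulation: it contrasts class-conditional mean features across domains and balances the intra-class and inter-class terms with a simple adaptive factor. All names and the exact weighting rule are assumptions made for illustration; in practice the target labels would be pseudo-labels.

```python
import numpy as np

def class_means(features, labels, n_classes):
    """Per-class mean feature vectors (rows stay zero if a class is absent)."""
    means = np.zeros((n_classes, features.shape[1]))
    for c in range(n_classes):
        idx = labels == c
        if idx.any():
            means[c] = features[idx].mean(axis=0)
    return means

def inter_intra_discrepancy(src_feat, src_lab, tgt_feat, tgt_lab, n_classes):
    """Illustrative intra-class / inter-class discrepancy with an adaptive weight."""
    mu_s = class_means(src_feat, src_lab, n_classes)
    mu_t = class_means(tgt_feat, tgt_lab, n_classes)
    # intra-class term: same class across domains should be close
    intra = np.mean(np.sum((mu_s - mu_t) ** 2, axis=1))
    # inter-class term: different classes across domains should stay apart
    inter, pairs = 0.0, 0
    for c1 in range(n_classes):
        for c2 in range(n_classes):
            if c1 != c2:
                inter += np.sum((mu_s[c1] - mu_t[c2]) ** 2)
                pairs += 1
    inter /= max(pairs, 1)
    # adaptive factor: weight the two terms by their current relative magnitude
    mu = intra / (intra + inter + 1e-12)
    return (1 - mu) * intra - mu * inter   # smaller is better; may be negative

rng = np.random.default_rng(1)
loss = inter_intra_discrepancy(rng.normal(size=(64, 16)), rng.integers(0, 4, 64),
                               rng.normal(size=(64, 16)), rng.integers(0, 4, 64), 4)
print(f"adaptive inter-intra discrepancy: {loss:.4f}")
```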
The aim of this study is to investigate the impacts of landslide and non-landslide sampling strategies on the performance of landslide susceptibility assessment (LSA). The study area is the Feiyun catchment in Wenzhou City, Southeast China. Two types of landslide samples, combined with seven non-landslide sampling strategies, resulted in a total of 14 scenarios. The corresponding landslide susceptibility map (LSM) for each scenario was generated using the random forest model. The receiver operating characteristic (ROC) curve and statistical indicators were calculated and used to assess the impact of the dataset sampling strategy. The results showed that higher accuracies were achieved when using the landslide core as positive samples, combined with non-landslide sampling from the very low susceptibility zone or buffer zone. The results reveal the influence of landslide and non-landslide sampling strategies on the accuracy of LSA, which provides a reference for subsequent researchers aiming to obtain a more reasonable LSM.
A total of 892 individuals sampled from a wild soybean population in a natural reserve near the Yellow River estuary, located in Kenli of Shandong Province (China), were investigated. Seventeen SSR (simple sequence repeat) primer pairs from cultivated soybeans were used to estimate the genetic diversity of the population and its variation pattern with changes in sample size (sub-samples), in addition to investigating the fine-scale spatial genetic structure within the population. The results showed relatively high genetic diversity of the population, with a mean allele number (A) of 2.88, expected heterozygosity (He) of 0.431, Shannon diversity index (I) of 0.699, and percentage of polymorphic loci (P) of 100%. Sub-samples of different sizes (ten groups) were randomly drawn from the population and their genetic diversity was calculated by computer simulation. A regression model of the four diversity indices against sample size was then fitted. As a result, 27-52 individuals were sufficient to capture 95% of the total genetic variability of the population. Spatial autocorrelation analysis revealed that the genetic patch size of this wild soybean population is about 18 m. The study provides a scientific basis for the sampling strategy of wild soybean populations.
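The sample-size analysis in this abstract (randomly drawn sub-samples whose diversity is compared against the full population) can be sketched as a small rarefaction-style simulation. The code below is a toy illustration with simulated haploid genotypes, not the study's data or software; it only shows how a "fraction of total He captured vs. sample size" curve would be computed.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy genotype matrix: 892 individuals x 17 SSR loci, alleles coded as small integers
genotypes = rng.integers(0, 4, size=(892, 17))

def expected_heterozygosity(geno):
    """Mean He = 1 - sum(p_i^2) across loci (haploid coding for simplicity)."""
    he = []
    for locus in geno.T:
        _, counts = np.unique(locus, return_counts=True)
        p = counts / counts.sum()
        he.append(1.0 - np.sum(p ** 2))
    return float(np.mean(he))

total_he = expected_heterozygosity(genotypes)
for n in range(10, 101, 10):
    reps = [expected_heterozygosity(genotypes[rng.choice(892, n, replace=False)])
            for _ in range(50)]                       # 50 random sub-samples per size
    print(f"n = {n:3d}  fraction of total He captured: {np.mean(reps) / total_he:.1%}")
```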
In recent years, semantic segmentation on 3D point cloud data has attracted much attention. Unlike 2D images, where pixels are distributed regularly in the image domain, 3D point clouds in non-Euclidean space are irregular and inherently sparse. Therefore, it is very difficult to extract long-range contexts and effectively aggregate local features for semantic segmentation in 3D point cloud space. Most current methods either focus on local feature aggregation or on long-range context dependency, but fail to directly establish a global-local feature extractor for point cloud semantic segmentation tasks. In this paper, we propose a Transformer-based stratified graph convolutional network (SGT-Net), which enlarges the effective receptive field and builds direct long-range dependency. Specifically, we first propose a novel dense-sparse sampling strategy that provides dense local vertices and sparse long-distance vertices for the subsequent graph convolutional network (GCN). Secondly, we propose a multi-key self-attention mechanism based on the Transformer to further augment the weights of crucial neighboring relationships and enlarge the effective receptive field. In addition, to further improve the efficiency of the network, we propose a similarity measurement module to determine whether the neighborhood near the center point is effective. We demonstrate the validity and superiority of our method on the S3DIS and ShapeNet datasets. Through ablation experiments and segmentation visualization, we verify that the SGT model improves the performance of point cloud semantic segmentation.
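A minimal sketch of what a dense-sparse vertex selection could look like is given below: for each center point it keeps the nearest neighbours (dense local vertices) plus a set of evenly rank-spaced farther points (sparse long-distance vertices). The selection rule is an assumption for illustration and is not the SGT-Net implementation.

```python
import numpy as np

def dense_sparse_neighbors(points, center_idx, k_dense=16, k_sparse=16):
    """Pick k_dense nearest neighbours plus k_sparse vertices drawn evenly from the
    remaining, farther points (a simple stand-in for a dense-sparse sampling rule)."""
    d = np.linalg.norm(points - points[center_idx], axis=1)
    order = np.argsort(d)
    dense = order[1:k_dense + 1]              # closest points (skip the center itself)
    rest = order[k_dense + 1:]
    stride = max(len(rest) // k_sparse, 1)
    sparse = rest[::stride][:k_sparse]        # evenly spaced in distance rank
    return dense, sparse

pts = np.random.default_rng(3).random((2048, 3))
dense, sparse = dense_sparse_neighbors(pts, center_idx=0)
print(len(dense), "dense local vertices,", len(sparse), "sparse long-range vertices")
```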
Aim To develop a method to estimate population pharmacokinetic parameters from the limited sampling time points available clinically during therapeutic drug monitoring. Methods Various simulations were attempted using a one-compartment open model with first-order absorption to determine PK parameter estimates under different sampling strategies, as a validation of the method. The estimated parameters were further verified by comparison with the observed values. Results Samples collected at a single time point close to the non-informative sampling time point identified by this method led to biased and inaccurate parameter estimations. Furthermore, the relationship between the estimated non-informative sampling time points and the parameter values was examined. The non-informative sampling time points were derived for several typical scenarios and the results were plotted to show the tendency. As a result, one non-informative time point was found to be appropriate for clearance, and two for both volume of distribution and the absorption rate constant in the present study. It was found that the non-informative sampling time points estimated by the method increase with increasing volume of distribution and with decreasing clearance and absorption rate constant. Conclusion A rational sampling strategy during therapeutic drug monitoring can be established using the method presented in this study.
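The simulations described above rest on the standard one-compartment model with first-order absorption. As a reference, the sketch below evaluates the textbook concentration-time equation with hypothetical parameter values (CL, V, ka, dose); it is not the study's simulation code.

```python
import numpy as np

def concentration(t, dose, f, ka, ke, v):
    """One-compartment model, first-order absorption (standard textbook form):
    C(t) = F*D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t))."""
    return f * dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# hypothetical parameters: CL = 5 L/h, V = 50 L, ka = 1.2 /h, dose = 500 mg, F = 1
cl, v, ka, dose, f = 5.0, 50.0, 1.2, 500.0, 1.0
ke = cl / v                                   # elimination rate constant
for t in (0.5, 1, 2, 4, 8, 12):
    print(f"t = {t:4.1f} h   C = {concentration(t, dose, f, ka, ke, v):6.2f} mg/L")
```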
The quality of debris flow susceptibility mapping varies with sampling strategies. This paper aims at comparing three sampling strategies and determining the optimal one for sampling debris flow watersheds. The three sampling strategies studied were the centroid of the scarp area (COSA), the centroid of the flowing area (COFA), and the centroid of the accumulation area (COAA) of debris flow watersheds. An inventory consisting of 150 debris flow watersheds and 12 conditioning factors was prepared for the research. Firstly, the information gain ratio (IGR) method was used to analyze the predictive ability of the conditioning factors. Subsequently, the 12 conditioning factors were used in the modeling of an artificial neural network (ANN), random forest (RF) and support vector machine (SVM). Then, the receiver operating characteristic curves (ROC) and the areas under the curves (AUC) were used to evaluate model performance. Finally, a scoring system was used to score the quality of the debris flow susceptibility maps. Samples obtained from the accumulation area have the strongest predictive ability and enable the models to achieve the best performance. The AUC values corresponding to the best model performance on the validation dataset were 0.861, 0.804 and 0.856 for SVM, ANN and RF, respectively. The sampling strategy of the centroid of the scarp area is optimal, producing the highest-quality debris flow susceptibility maps with scores of 373470, 393241 and 362485 for SVM, ANN and RF, respectively.
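The information gain ratio (IGR) screening step mentioned above can be summarized in a few lines: discretize a conditioning factor, compute its information gain for the class labels, and divide by the intrinsic entropy of the discretization. The sketch below uses a toy "slope" factor and synthetic labels purely for illustration.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain_ratio(feature_bins, labels):
    """IGR = (H(Y) - H(Y|X)) / H(X), with the conditioning factor discretized into bins."""
    h_y = entropy(labels)
    h_y_given_x = 0.0
    for v in np.unique(feature_bins):
        mask = feature_bins == v
        h_y_given_x += mask.mean() * entropy(labels[mask])
    h_x = entropy(feature_bins)
    return (h_y - h_y_given_x) / h_x if h_x > 0 else 0.0

rng = np.random.default_rng(4)
slope = rng.normal(30, 10, 300)                             # toy conditioning factor
labels = (slope + rng.normal(0, 8, 300) > 35).astype(int)   # toy debris-flow label
bins = np.digitize(slope, np.quantile(slope, [0.25, 0.5, 0.75]))
print(f"IGR of toy slope factor: {information_gain_ratio(bins, labels):.3f}")
```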
This study was aimed at investigating the sampling strategies for two types of figures: 3-D cubes and human faces. The research focused on: (a) where the sampling process started; and (b) in what order the figures' features were sampled. The study consisted of two experiments: (a) sampling strategies for 3-D cubes; and (b) sampling strategies for human faces. The results showed that: (a) for 3-D cubes, the first sampling was mostly located at the outline parts and rarely at the center part, while for human faces, the first sampling was mostly located at the hair and outline parts and rarely at the mouth or cheek parts; in most cases, the first sampling position had no significant effect on cognitive performance; and (b) the sampling order, both for 3-D cubes and for human faces, was determined by the degree of difference among the sampled features.
Field nutrient distribution maps obtained from studies of soil variation within fields are the basis of precision agriculture. The quality of these maps for management depends on the accuracy of the predicted values, which in turn depends on the initial sampling. To produce reliable predictions efficiently, the minimal sampling size and combination should be decided first, which avoids wasting funds on field sampling work. A 7.9-hectare silage field close to the Agricultural Research Institute at Hillsborough, Northern Ireland, was selected for the study. Soil samples were collected from the field at 25 m intervals in a rectangular grid to provide a database of selected soil properties. Different data combinations were subsequently extracted from this database for comparison purposes, and ordinary kriging was used to produce interpolated soil maps. The predicted data groups were compared using the least significant difference (LSD) test. The results showed that a sample size of 62 in a triangular arrangement was sufficient to reach the required accuracy for soil available K, and that the triangular sample combination proved superior to a rectangular one of similar sample size.
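The core comparison in this abstract, interpolating from a reduced sample back onto the full grid and checking the loss of accuracy, can be sketched as follows. Note that the sketch substitutes simple linear interpolation (scipy's griddata) for ordinary kriging and uses a synthetic soil-K surface, so it only illustrates the sample-size comparison, not the study's geostatistical workflow.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(7)

# synthetic soil-K surface sampled on a 25 m rectangular grid (as in the study design)
gx, gy = np.meshgrid(np.arange(0, 300, 25), np.arange(0, 275, 25))
k_true = 150 + 40 * np.sin(gx / 60.0) + 30 * np.cos(gy / 45.0) + rng.normal(0, 5, gx.shape)
pts = np.column_stack([gx.ravel(), gy.ravel()])
vals = k_true.ravel()

def subset_error(n_keep):
    """Interpolate from a random subset back to all grid nodes and report RMSE.
    (Linear interpolation stands in for ordinary kriging here.)"""
    keep = rng.choice(len(pts), n_keep, replace=False)
    pred = griddata(pts[keep], vals[keep], pts, method="linear")
    ok = ~np.isnan(pred)                      # nodes outside the convex hull are skipped
    return float(np.sqrt(np.mean((pred[ok] - vals[ok]) ** 2)))

for n in (30, 62, 90, len(pts)):
    print(f"n = {n:3d}   RMSE = {subset_error(n):.2f}")
```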
Environmental DNA (eDNA) integrated with metabarcoding is a promising and powerful tool for species composition and biodiversity assessment in aquatic ecosystems and is increasingly applied to evaluate fish diversity. To date, however, no standardized eDNA-based protocol has been established to monitor fish diversity. In this study, we investigated and compared two filtration methods and three DNA extraction methods using three filtration water volumes to determine a suitable approach for eDNA-based fish diversity monitoring in the Pearl River Estuary (PRE), a highly anthropogenically disturbed estuarine ecosystem. Compared to filtration-based precipitation, direct filtration was a more suitable method for eDNA metabarcoding in the PRE. The combined use of the DNeasy Blood and Tissue Kit (BT) and traditional phenol/chloroform (PC) extraction produced higher DNA yields, amplicon sequence variants (ASVs), and Shannon diversity indices, and generated more homogeneous and consistent community composition among replicates. Compared to the other combined protocols, the PC and BT methods obtained better species detection, higher fish diversity, and greater consistency at filtration water volumes of 1000 and 2000 mL, respectively. All eDNA metabarcoding protocols were more sensitive than bottom trawling in the PRE fish surveys, and combining the two techniques yielded greater taxonomic diversity. Furthermore, combining traditional methods with eDNA analysis enhanced accuracy. These results indicate that methodological decisions related to eDNA metabarcoding should be made with caution for fish community monitoring in estuarine ecosystems.
Background: Depending on tree and site characteristics, crown biomass accounts for a significant portion of the total aboveground biomass of a tree. Crown biomass estimation is useful for different purposes, including evaluating the economic feasibility of crown utilization for energy production or forest products, fuel load assessments and fire management strategies, and wildfire modeling. However, crown biomass is difficult to predict because of the variability within and among species and sites. Thus the allometric equations used for predicting crown biomass should be based on data collected with precise and unbiased sampling strategies. In this study, we evaluate the performance of different sampling strategies for estimating crown biomass and the effect of sample size on the estimates. Methods: Using data collected from 20 destructively sampled trees, we evaluated 11 different sampling strategies using six evaluation statistics: bias, relative bias, root mean square error (RMSE), relative RMSE, amount of biomass sampled, and relative biomass sampled. We also evaluated the performance of the selected sampling strategies when different numbers of branches (3, 6, 9, and 12) were selected from each tree. A tree-specific log-linear model with branch diameter and branch length as covariates was used to obtain individual branch biomass. Results: Compared to all other methods, stratified sampling with the probability-proportional-to-size estimation technique produced better results when three or six branches per tree were sampled. However, systematic sampling with the ratio estimation technique was best when at least nine branches per tree were sampled. Under the stratified sampling strategy, selecting an unequal number of branches per stratum produced results approximately similar to simple random sampling, but it further decreased the RMSE when information on branch diameter was used in the design and estimation phases. Conclusions: Use of auxiliary information in the design or estimation phase reduces the RMSE produced by a sampling strategy. However, this is attained at the cost of sampling a larger amount of biomass. Based on our findings, we recommend sampling nine branches per tree to be reasonably efficient and to limit the amount of fieldwork.
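As an illustration of the best-performing strategy for small branch samples, the sketch below draws branches with probability proportional to size (using D^2*L as an assumed size variable) and estimates total crown biomass with the Hansen-Hurwitz estimator. The branch data are synthetic and the size variable is an assumption; the study's exact design may differ.

```python
import numpy as np

rng = np.random.default_rng(5)

# toy tree: 30 branches with diameters (cm), lengths (m) and "true" biomass (kg)
diam = rng.uniform(1, 8, 30)
length = rng.uniform(0.5, 4, 30)
true_biomass = 0.08 * diam ** 2 * length * rng.lognormal(0, 0.1, 30)

def pps_estimate(n_sample=9):
    """PPS (probability proportional to size) with-replacement draw of branches,
    using D^2 * L as the size variable, and the Hansen-Hurwitz total estimator."""
    size = diam ** 2 * length
    p = size / size.sum()
    picks = rng.choice(len(size), size=n_sample, replace=True, p=p)
    return np.mean(true_biomass[picks] / p[picks])   # estimated crown biomass total

print(f"true crown biomass : {true_biomass.sum():.1f} kg")
print(f"PPS estimate (n=9) : {pps_estimate():.1f} kg")
```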
As market competition among enterprises grows intense and the demand for high-quality products increases rapidly, product quality inspection and control has become one of the most important issues in manufacturing, and improving the efficiency and accuracy of inspection is one of the problems that enterprises must solve. It is particularly important to establish rational inspection planning for parts before product quality can be inspected correctly. Traditional inspection methods have difficulty satisfying the speed and accuracy requirements of modern manufacturing, so CAD-based computer-aided inspection planning (CAIP) systems for coordinate measuring machines (CMM) came into being. In this paper, an algorithm for adaptive sampling and collision-free inspection path generation is proposed, aimed at CAD model-based inspection planning for CMMs. Firstly, using a method of stepwise adaptive subdivision and iteration, the specified number of sampling points with an even distribution is generated automatically. Then, the initial path is generated by planning the inspection sequence of measurement points according to each point's weighted sum of parameters, and collisions are detected by constructing section lines between the probe swept-volume surfaces and the part surfaces, with axis-aligned bounding box (AABB) filtering to improve detection efficiency. For collided path segments, collision avoidance is implemented first for possible outer-circle features and then for other collisions, for which the obstacle-avoiding movements are planned with heuristic rules combined with a designed expanded AABB to set the obstacle-avoiding points. The computer experimental results show that the presented algorithm can plan sampling point locations with strong adaptability to general surfaces of different complexity, generate an efficient optimal path in a short time, and avoid collisions effectively.
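The AABB filtering step mentioned above is a cheap broad-phase test: exact section-line collision checks only need to run for probe/part pairs whose axis-aligned bounding boxes overlap. A minimal sketch of that test (with a padded box standing in for the expanded AABB) is shown below; the geometry is toy data, not the paper's CAD model.

```python
import numpy as np

def aabb(points, pad=0.0):
    """Axis-aligned bounding box of a point set, optionally expanded by `pad`."""
    pts = np.asarray(points, dtype=float)
    return pts.min(axis=0) - pad, pts.max(axis=0) + pad

def aabb_overlap(box_a, box_b):
    """Two AABBs overlap iff their intervals overlap on every axis."""
    (min_a, max_a), (min_b, max_b) = box_a, box_b
    return bool(np.all(max_a >= min_b) and np.all(max_b >= min_a))

# toy probe swept volume vs. a part surface patch: only overlapping pairs are
# passed on to the exact section-line collision check
probe_sweep = aabb([[0, 0, 10], [5, 5, 40]], pad=1.0)   # expanded box for clearance
part_patch = aabb([[3, 3, 0], [20, 20, 12]])
print("needs detailed collision check:", aabb_overlap(probe_sweep, part_patch))
```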
A method for constructing a core collection of Malus sieversii based on molecular marker data was proposed. Based on 128 SSR alleles of 109 M. sieversii accessions, an allele-preferred sampling strategy was used to construct the M. sieversii core collection by stepwise clustering, using the UPGMA (unweighted pair-group average method) cluster method with the Nei & Li, SM, and Jaccard genetic distances, and was compared with a random sampling strategy. The number of lost alleles and t-tests of Nei's gene diversity and Shannon's information index were used to evaluate the representativeness of the core collections. The results showed that, compared with the random sampling strategy, the allele-preferred sampling strategy constructed more representative core collections. The SM, Jaccard, and Nei & Li genetic distances showed no significant difference for construction of the M. sieversii core collection. SRAP (sequence-related amplified polymorphism) data and morphological data showed that the allele-preferred sampling strategy was a good sampling strategy for constructing the core collection of M. sieversii. The allele-preferred sampling strategy combined with the SM, Jaccard, and Nei & Li genetic distances using stepwise clustering was a suitable method for constructing the M. sieversii core collection.
Machine learning (ML) algorithms are frequently used in landslide susceptibility modeling. Different data handling strategies may generate variations in landslide susceptibility modeling, even when using the same ML algorithm. This research aims to compare combinations of inventory data handling, cross validation (CV), and hyperparameter tuning strategies to generate landslide susceptibility maps. The results are expected to provide a general strategy for landslide susceptibility modeling using ML techniques. The authors employed eight landslide inventory data handling scenarios to convert a landslide polygon into a landslide point, i.e., the landslide point is located on the toe (minimum height), on the scarp (maximum height), at the center of the landslide, randomly inside the polygon (1 point), randomly inside the polygon (3 points), randomly inside the polygon (5 points), randomly inside the polygon (10 points), or on a 15 m sampling grid. Random forest models using CV-nonspatial hyperparameter tuning, spatial CV-spatial hyperparameter tuning, and spatial CV-forward feature selection-no hyperparameter tuning were applied for each data handling strategy. The combination generated 24 random forest ML workflows, which were applied using a complete inventory of 743 landslides triggered by Tropical Cyclone Cempaka (2017) in Pacitan Regency, Indonesia, and 11 landslide controlling factors. The results show that grid sampling with spatial CV and spatial hyperparameter tuning is favorable because the strategy can minimize overfitting, generate a relatively high-performance predictive model, and reduce the appearance of susceptibility artifacts in the landslide area. Careful data inventory handling, CV, and hyperparameter tuning strategies should be considered in landslide susceptibility modeling to increase the applicability of landslide susceptibility maps in practical applications.
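Several of the polygon-to-point handling scenarios listed above (centroid, random interior points, grid sampling) can be reproduced with basic geometry operations. The sketch below uses shapely on a toy polygon; the toe/scarp options are omitted because they additionally require elevation data.

```python
import numpy as np
from shapely.geometry import Point, Polygon

rng = np.random.default_rng(6)
poly = Polygon([(0, 0), (40, 0), (55, 30), (20, 45), (-5, 25)])   # toy landslide polygon

def random_points_inside(polygon, n):
    """Rejection-sample n points uniformly inside the polygon."""
    minx, miny, maxx, maxy = polygon.bounds
    pts = []
    while len(pts) < n:
        p = Point(rng.uniform(minx, maxx), rng.uniform(miny, maxy))
        if polygon.contains(p):
            pts.append((p.x, p.y))
    return pts

def grid_points_inside(polygon, spacing=15.0):
    """Regular grid sampling (here 15 m) clipped to the polygon."""
    minx, miny, maxx, maxy = polygon.bounds
    xs = np.arange(minx, maxx, spacing)
    ys = np.arange(miny, maxy, spacing)
    return [(x, y) for x in xs for y in ys if polygon.contains(Point(x, y))]

print("centroid:", (poly.centroid.x, poly.centroid.y))
print("1 random interior point:", random_points_inside(poly, 1))
print("grid (15 m) points inside:", len(grid_points_inside(poly)))
```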
Background Mycophenolic acid (MPA), an anti-proliferative immunosuppressive agent, is used in the majority of immunosuppressive regimens in solid organ transplantation. This study aimed to investigate the pharmacokinetic (PK) characteristics of enteric-coated mycophenolate sodium (EC-MPS) and the area under the curve (AUC) from 0 to 12 hours estimated with limited sampling strategies (LSSs) in Chinese renal transplant recipients. Methods This study was conducted in 10 Chinese renal transplant patients receiving living-donor transplants and treated with EC-MPS, cyclosporine, and corticosteroids. MPA concentrations were measured by the enzyme multiplied immunoassay technique (EMIT). Whole 12-hour PK profiles were obtained on Day 4 after operation. LSSs using the jackknife technique, multiple stepwise regression analysis, and Bland-Altman analysis were developed to estimate MPA AUC. Results The mean maximum plasma concentration, the mean time to reach the peak (Tmax), and the mean MPA AUC were (11.38±2.49) mg/L, (4.85±3.32) hours, and (63.19±13.54) mg·h/L, respectively. Among the 10 profiles, the MPA AUC of four patients was significantly higher than that of the other six patients, and the corresponding Tmax was significantly longer than that of the other six patients. No patient exhibited a second peak caused by enterohepatic recirculation. The best models were as follows: 27.46+0.94C3+3.24C8+2.81C10 (r2=0.972), which was used to predict the AUC of fast metabolizers with a mean prediction error (MPE) of -0.21% and a mean absolute prediction error (MAE) of 2.59%; and 36.65+3.08C6+5.30C10-4.04C12 (r2=0.992), which was used to predict the AUC of slow metabolizers with an MPE of 0.58% and an MAE of 1.95%. Conclusions The PKs of EC-MPS showed high variability among Chinese renal transplant recipients. The preliminary PK data indicated the existence of slow and fast metabolizers. These findings may be associated with enterohepatic recirculation.
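The limited-sampling models quoted in this abstract are simple linear combinations of a few timed concentrations, so applying one is plain arithmetic. The sketch below plugs hypothetical concentrations into the fast-metabolizer equation from the abstract and computes the prediction error against a hypothetical measured AUC.

```python
def auc_fast(c3, c8, c10):
    """Three-point limited-sampling model quoted in the abstract for fast metabolizers:
    AUC = 27.46 + 0.94*C3 + 3.24*C8 + 2.81*C10 (concentrations in mg/L)."""
    return 27.46 + 0.94 * c3 + 3.24 * c8 + 2.81 * c10

# hypothetical concentrations at 3, 8 and 10 h post-dose, and a hypothetical measured AUC0-12
predicted = auc_fast(c3=6.2, c8=4.1, c10=3.5)
observed = 60.0
error_pct = (predicted - observed) / observed * 100   # prediction error for this one profile
print(f"predicted AUC = {predicted:.1f} mg*h/L, prediction error = {error_pct:+.1f}%")
```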
A new heuristic approach was undertaken for the establishment of a core set for the diversity research of rice. As a result, 107 entries were selected from the 10 368 characterized accessions. The core set derived using this new approach provided a good representation of the characterized accessions present in the entire collection. No significant differences for the mean, range, standard deviation and coefficient of variation of each trait were observed between the core and existing collections. We also compared the diversity of core sets established using this Heuristic Core Collection (HCC) approach with those of core sets established using the conventional clustering methods. This modified heuristic algorithm can also be used to select genotype data with allelic richness and reduced redundancy, and to facilitate management and use of large collections of plant genetic resources in a more efficient way.
Vitex rotundifolia L. is an important plant species used in traditional Chinese medicine. For its efficient use and conservation, the genetic diversity and clonal variation of V. rotundifolia populations in China were investigated using inter-simple sequence repeat markers. Fourteen natural populations were included to estimate genetic diversity, and a large population with 135 individuals was used to analyze clonal variation and fine-scale spatial genetic structure. The overall genetic diversity (GD) of V. rotundifolia populations in China was moderate (GD = 0.190), with about 40% within-population variation. Across all populations surveyed, the average within-population diversity was moderate (P = 22.6%; GD = 0.086). A relatively high genetic differentiation (Gst = 0.587) among populations was detected based on the analysis of molecular variance data. Such characteristics of V. rotundifolia are likely attributable to its sexual/asexual reproduction and limited gene flow. The genotypic diversity (D = 0.992) was greater than the average value for a clonal plant, indicating significant reproduction through seedlings. Spatial autocorrelation analysis showed a clear within-population structure with gene clusters of approximately 20 m. The genetic diversity patterns of V. rotundifolia in China provide a useful guide for its efficient use and conservation: select particular populations displaying greater variation that may contain the required medicinal compounds, and sample individuals within a population at >20 m spatial intervals to avoid collecting individuals with identical or similar genotypes.
For more than two decades, rudimentary versions of the fixed sample and sequential search strategies have provided the primary theoretical foundation for the study of mate choice decisions by searchers. The theory that surrounds these models has expanded markedly over this time period. In this paper, we review and extend results derived from these models, with a focus on the empirical analysis of searcher behavior. The basic models are impractical for empirical purposes because they rely on the assumption that searchers--and, for applied purposes, researchers--assess prospective mates based on their quality, the fitness consequences of mate choice decisions. Here we expound versions of the models that are more empirically useful, reformulated to reflect decisions based on male phenotypic characters. For some organisms, it may be possible to use preference functions to derive predictions from the reformulated models and thereby avoid difficulties associated with the measurement of male quality per se. But predictions derived from the two models are difficult to differentiate empirically, regardless of how the models are formulated. Here we develop ideas that illustrate how this goal might be accomplished. In addition, we clarify how the variability of male quality should be evaluated and we extend what is known about how this variability influences searcher behavior under each model. More general difficulties associated with the empirical study of mate choice decisions by searchers are also discussed [Current Zoology 59 (2): 184-199, 2013].