Many fields, such as neuroscience, are experiencing a vast proliferation of cellular data, underscoring the need for organizing and interpreting large datasets. A popular approach partitions data into manageable subsets via hierarchical clustering, but objective methods to determine the appropriate classification granularity are missing. We recently introduced a technique to systematically identify when to stop subdividing clusters, based on the fundamental principle that cells must differ more between than within clusters. Here we present the corresponding protocol to classify cellular datasets by combining data-driven unsupervised hierarchical clustering with statistical testing. These general-purpose functions are applicable to any cellular dataset that can be organized as a two-dimensional matrix of numerical values, including molecular, physiological, and anatomical datasets. We demonstrate the protocol using cellular data from the Janelia MouseLight project to characterize morphological aspects of neurons.
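The released protocol functions are not reproduced here, but the stopping principle is easy to sketch: subdivide a cluster only while cells differ more between than within the candidate subclusters. A minimal Python illustration, assuming a Ward linkage and a one-sided Mann-Whitney test as the statistical criterion:

```python
# Minimal sketch (not the authors' released protocol code): split a cluster
# only if between-cluster distances significantly exceed within-cluster ones.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform
from scipy.stats import mannwhitneyu

def should_split(X, alpha=0.05):
    """Split the cells in X into two subclusters; accept the split only if
    between-cluster distances stochastically dominate within-cluster ones."""
    labels = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")
    D = squareform(pdist(X))
    same = labels[:, None] == labels[None, :]
    iu = np.triu_indices(len(X), k=1)
    within, between = D[iu][same[iu]], D[iu][~same[iu]]
    _, p = mannwhitneyu(between, within, alternative="greater")
    return p < alpha, labels

X = np.random.rand(60, 10)            # toy cells-by-features matrix
split, labels = should_split(X)
print("subdivide further:", split)
```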
Test data compression and test resource partitioning (TRP) are essential to reduce the amount of test data in system-on-chip testing. A novel variable-to-variable-length compression code, advanced frequency-directed run-length (AFDR) coding, is designed. Different from frequency-directed run-length (FDR) codes, AFDR encodes both 0- and 1-runs and assigns the same codewords to equal-length runs. It also modifies the codewords for 00 and 11 to improve compression performance. Experimental results for ISCAS 89 benchmark circuits show that AFDR codes achieve a higher compression ratio than FDR and other compression codes.
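The exact AFDR codeword tables are defined in the paper; the sketch below is a simplified stand-in that captures the two ingredients named above: both 0-runs and 1-runs are encoded, and equal-length runs share a codeword (an Elias-gamma code is assumed for the run lengths):

```python
# Simplified sketch of AFDR-style coding (codeword tables differ from the
# paper's): both 0-runs and 1-runs are extracted, and equal-length runs map
# to the same variable-length codeword.
from itertools import groupby

def elias_gamma(n):           # variable-length codeword for n >= 1
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def encode(bits):
    runs = [(sym, len(list(g))) for sym, g in groupby(bits)]
    # one leading bit records the symbol of the first run; runs alternate
    # thereafter, so only the run lengths need codewords
    return runs[0][0] + "".join(elias_gamma(n) for _, n in runs)

data = "0" * 30 + "1" * 5 + "0" * 12
code = encode(data)
print(len(data), "->", len(code), "bits:", code)
```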
This paper presents a new test data compression/decompression method for SoC testing, called hybrid run-length codes. The method fully analyzes the factors that influence the test parameters: compression ratio, test application time, and area overhead. To improve the compression ratio, the method is based on variable-to-variable run-length codes, and a novel algorithm is proposed to reorder the test vectors and fill the unspecified bits in the pre-processing step. With a novel on-chip decoder, hybrid run-length codes achieve low test application time and low area overhead. Finally, an experimental comparison on ISCAS 89 benchmark circuits validates the proposed method.
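The paper's reordering and filling algorithm is not reproduced here; the sketch below shows the common minimum-transition fill that such pre-processing builds on, where each unspecified bit repeats the previous bit to extend the current run:

```python
# Hedged sketch of the pre-processing idea: fill unspecified bits ('X') in a
# test cube so they extend the current run, which favours the long runs that
# a downstream run-length coder exploits.
def fill_dont_cares(cube, first="0"):
    out, last = [], first
    for b in cube:
        last = b if b in "01" else last   # an 'X' repeats the previous bit
        out.append(last)
    return "".join(out)

print(fill_dont_cares("0XX1XX0XXX"))      # -> 0001110000
```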
Data obtained from accelerated life testing (ALT) when there are two or more failure modes, commonly referred to as competing failure modes, are often incomplete. The incompleteness is mainly due to censoring, as well as masking, where the failure time is observed but the corresponding failure mode is not identified, because identifying the failure mode may be expensive or difficult owing to a lack of appropriate diagnostics. A method is proposed for analyzing incomplete data of constant-stress ALT with competing failure modes. It is assumed that the failure modes have s-independent latent lifetimes and that the log lifetime of each failure mode can be written as a linear function of stress. The parameters of the model are estimated by using the expectation-maximization (EM) algorithm with incomplete data. Simulation studies are performed to check model validity and investigate the properties of the estimates. For further validation, the method is also illustrated by an example that shows the process of analyzing incomplete data from an ALT of an insulation system. By accounting for the incompleteness of the data in modeling and using the EM algorithm for estimation, the method becomes more flexible in ALT analysis.
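As a hedged illustration of the estimation idea, the sketch below runs EM on masked competing-risks data for a deliberately simplified case: two exponential failure modes at a single stress level, so the E-step posterior and the M-step update have closed forms (the paper's model adds the log-linear stress-life relation):

```python
# Illustrative EM for masked competing risks, simplified to two exponential
# failure modes at one stress level; times, causes and starting hazard rates
# are toy assumptions.
import numpy as np

t = np.array([2.1, 3.5, 0.8, 4.2, 1.6, 2.9])   # observed failure times
cause = np.array([0, 1, -1, 0, -1, 1])          # -1 = masked failure mode
lam = np.array([0.5, 0.5])                      # initial hazard rates

for _ in range(200):
    w = np.zeros((len(t), 2))
    for j in (0, 1):
        w[cause == j, j] = 1.0                  # known causes count fully
    # E-step: for independent exponentials, P(cause j | failure) = lam_j/sum
    w[cause == -1] = lam / lam.sum()
    # M-step: exponential MLE = expected cause-j failures / total time on test
    lam = w.sum(axis=0) / t.sum()

print("estimated hazard rates:", lam)
```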
We developed an inversion technique to determine in situ stresses for elliptical boreholes of arbitrary trajectory. In this approach, borehole geometry, drilling-induced fracture information, and other available leak-off test data were used to construct a mathematical model, which was in turn solved as an overdetermined system of equations. The method has been demonstrated by a case study in the Appalachian Basin, USA. The calculated horizontal stresses are in reasonable agreement with the reported regional stress study of the area, although there are no field measurement data of the studied well for direct calibration. The results also indicate that a 2% axis difference in the elliptical borehole geometry can cause a 5% difference in the minimum horizontal stress calculation and a 10% difference in the maximum horizontal stress calculation.
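The stress model itself is beyond a short example, but the inversion step is standard: stack one equation per observation and solve the overdetermined system in the least-squares sense. A sketch with placeholder coefficients:

```python
# Generic sketch of the inversion step: A and b are random placeholders; in
# the method they come from the elliptical-borehole stress model and the
# fracture / leak-off observations.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(12, 3))        # 12 observations, 3 unknown stresses
x_true = np.array([30.0, 22.0, 18.0])
b = A @ x_true + rng.normal(scale=0.5, size=12)

x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print("recovered stresses:", x_hat)
```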
A new structural damage identification method using limited static test displacements, based on grey system theory, is proposed in this paper. The grey relation coefficient of displacement curvature is defined and used to locate damage in the structure, and an iterative estimation scheme for solving nonlinear optimization programming problems based on the quadratic programming technique is used to identify the damage magnitude. A numerical example of a cantilever beam with single or multiple damages is used to examine the capability of the proposed grey-theory-based method to localize and identify damage. The factors of measurement noise and incomplete test data are also discussed. The numerical results show that damage in the structure can be localized correctly using the grey relation coefficient of displacement curvature, and the damage magnitude can be identified with a high degree of accuracy, regardless of the number of measured displacement nodes. The proposed method requires only limited static test data, which are easily available in practice, and has wide applications in structural damage detection.
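A minimal sketch of the localization stage, assuming central-difference curvature and the standard grey relational coefficient with distinguishing coefficient rho = 0.5:

```python
# Sketch of the damage-localization step: curvature of the static deflection
# via central differences, then the standard grey relational coefficient
# between intact and damaged curvature series; deflections are toy values.
import numpy as np

def curvature(u, h=1.0):
    return (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2

def grey_relation(ref, cmp_, rho=0.5):
    delta = np.abs(ref - cmp_)
    return (delta.min() + rho * delta.max()) / (delta + rho * delta.max())

u_intact = np.array([0.0, 0.9, 2.1, 3.8, 6.2, 9.0])
u_damaged = np.array([0.0, 0.9, 2.2, 4.3, 7.0, 10.1])
xi = grey_relation(curvature(u_intact), curvature(u_damaged))
print("grey relational coefficients:", xi)  # low values flag damage sites
```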
This paper discusses two tests for varying dispersion of binomial data in the framework of nonlinear logistic models with random effects, which are widely used in analyzing longitudinal binomial data. The first is an individual test, with power calculation, for varying dispersion through testing the randomness of cluster effects, which extends Dean (1992) and Commenges et al. (1994). The second is a composite test for varying dispersion through simultaneously testing the randomness of cluster effects and the equality of random-effect means. The score test statistics are constructed and expressed in simple, easy-to-use matrix formulas. The authors illustrate the test methods using the insecticide data (Giltinan, Capizzi & Malani, 1988).
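The score statistics themselves are given in the paper in matrix form; as a plainer stand-in (not the authors' test), the sketch below checks for extra-binomial variation with a Pearson statistic calibrated by a parametric bootstrap:

```python
# Not the paper's score test: a simple parametric-bootstrap check for
# extra-binomial variation, illustrating what "testing for varying
# dispersion" means operationally. Counts are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n, y = 20, np.array([4, 9, 2, 12, 6, 15, 3, 11])   # clusters of size n

def pearson(y, n):
    p = y.sum() / (n * len(y))
    return (((y - n * p) ** 2) / (n * p * (1 - p))).sum()

obs = pearson(y, n)
boot = [pearson(rng.binomial(n, y.sum() / (n * len(y)), len(y)), n)
        for _ in range(5000)]
print("bootstrap p-value:", np.mean(np.array(boot) >= obs))
```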
Under Type-II progressively hybrid censoring, this paper discusses statistical inference and optimal design of a step-stress partially accelerated life test for hybrid systems in the presence of masked data. It is assumed that the lifetimes of the components in the hybrid system follow independent and identical modified Weibull distributions. The maximum likelihood estimations (MLEs) of the unknown parameters, acceleration factor, and reliability indexes are derived by using the Newton-Raphson algorithm. The asymptotic variance-covariance matrix and the approximate confidence intervals are obtained based on the normal approximation to the asymptotic distribution of the MLEs of the model parameters. Moreover, two bootstrap confidence intervals are constructed by using the parametric bootstrap method. The optimal time for changing stress levels is determined under the D-optimality and A-optimality criteria. Finally, a Monte Carlo simulation study is carried out to illustrate the proposed procedures.
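A minimal sketch of the parametric-bootstrap interval, assuming a plain two-parameter Weibull in place of the paper's modified Weibull under step-stress:

```python
# Sketch of a parametric-bootstrap confidence interval: fit, resample from
# the fitted model, refit, and take percentile bounds.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(2)
data = weibull_min.rvs(1.8, scale=100, size=50, random_state=rng)

c_hat, _, s_hat = weibull_min.fit(data, floc=0)
boot = np.array([weibull_min.fit(
    weibull_min.rvs(c_hat, scale=s_hat, size=len(data), random_state=rng),
    floc=0)[::2] for _ in range(1000)])         # (shape, scale) pairs
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("95% CI shape:", (lo[0], hi[0]), " scale:", (lo[1], hi[1]))
```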
The question of how to choose a copula model that best fits a given dataset is a predominant limitation of the copula approach, and the present study investigates techniques of goodness-of-fit testing for multi-dimensional copulas. A goodness-of-fit test based on Rosenblatt's transformation was mathematically expanded from two dimensions to three dimensions, and procedures for a bootstrap version of the test are provided. Through stochastic copula simulation, an empirical application to historical drought data at the Lintong Gauge Station shows that the goodness-of-fit tests perform well, revealing that both trivariate Gaussian and Student t copulas are acceptable for modeling the dependence structure of the observed drought duration, severity, and peak. Goodness-of-fit tests for multi-dimensional copulas can support the potential application of a wider range of copulas to describe the associations of correlated hydrological variables. However, applying copulas with more than three dimensions requires more complicated computation as well as exploration and parameterization of the corresponding copulas.
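For the Gaussian copula the Rosenblatt transform has a convenient closed form: mapping the copula sample back through the inverse Cholesky factor yields variables that are i.i.d. uniform when the copula is correct. A sketch with a per-margin KS check standing in for the paper's bootstrap procedure:

```python
# Rosenblatt transform for a trivariate Gaussian copula: if the copula is
# correct, the transformed variables are i.i.d. uniform. The correlation
# matrix R is a toy assumption.
import numpy as np
from scipy.linalg import cholesky, solve_triangular
from scipy.stats import norm, kstest

rng = np.random.default_rng(3)
R = np.array([[1.0, 0.6, 0.4], [0.6, 1.0, 0.5], [0.4, 0.5, 1.0]])
L = cholesky(R, lower=True)
z = rng.normal(size=(500, 3)) @ L.T            # correlated normals
u = norm.cdf(z)                                # Gaussian copula sample

# Rosenblatt: L^{-1} z are independent N(0,1), so their CDFs are uniform
e = norm.cdf(solve_triangular(L, norm.ppf(u).T, lower=True).T)
for k in range(3):
    print(f"margin {k}: KS p-value =", kstest(e[:, k], "uniform").pvalue)
```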
The purpose of this study is to design a Moroccan Trail Making Test B; explore the effects of age, education, and gender on performance of the Trail Making Test (TMT); and provide normative information for Moroccan subjects. Our normalization study was conducted on 348 subjects (156 female and 192 male). The subjects were classified into four groups based on age (18-39 years, 40-59 years, 60-69 years, and ≥70 years) and three groups based on educational level (3-6 years, 7-10 years, and ≥11 years). The data were analyzed using descriptive statistics in SPSS. The results showed that increasing age and decreasing level of education significantly reduce performance on Trail A, Moroccan Trail B, and English Trail B. Only 229 bilingual subjects among the 348 initial subjects completed both versions of Trail B; for these subjects, there was no significant difference in performance between Moroccan Trail B and English Trail B.
Objective: Saccades accompanied by normal gain in video head impulse tests (vHIT) are often observed in patients with vestibular migraine (VM). However, they are not considered an independent indicator, reducing their utility in diagnosing VM. To better understand the clinical features of VM, it is necessary to examine raw saccade data. Methods: Fourteen patients with confirmed VM, 45 patients with probable VM (p-VM), and 14 age-matched healthy volunteers were included in this study. Clinical findings related to spontaneous nystagmus (SN), positional nystagmus (PN), head-shaking nystagmus (HSN), the caloric test, and vHIT were recorded. Raw saccade data were exported and numbered by sequence, and their features analyzed. Results: VM patients showed no SN, PN, or HSN, and less than half of them showed unilateral weakness (UW) on the caloric test. The first saccades from lateral semicircular canal stimulation were the most predominant for both left and right sides. Neither velocity nor time parameters were significantly different between the two sides. Most VM patients (86%) exhibited small saccades, around 35% of the head peak velocity, with a latency of 200-400 ms. Characteristics of saccades were similar in patients with p-VM. Only four normal subjects showed saccades, all unilateral and seemingly random. Conclusions: Small saccades involving bilateral semicircular canals with a scattered distribution pattern are common in patients with VM and p-VM.
By analyzing some existing test data generation methods, a new automated test data generation approach is presented. The linear predicate functions on a given path are directly used to construct a linear constraint system for the input variables. Only when a predicate function is nonlinear does its linear arithmetic representation need to be computed. If all the predicate functions on the given path are linear, either the desired test data or a guarantee that the path is infeasible can be obtained from the solution of the constraint system. Otherwise, iterative refinement of the input is required to obtain the desired test data. Theoretical analysis and test results show that the approach is simple and effective, and requires less computation. The scheme can also be used to generate path-based test data for programs with arrays and loops.
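A sketch of the core step, assuming example predicates x + y <= 10 and x - 2y < 4 along the path; the strict inequality is tightened by a small epsilon, a common practical workaround:

```python
# The linear branch predicates along a path become a linear constraint
# system; any feasible point is test data, and proven infeasibility means
# the path cannot be executed. The predicates here are hypothetical.
from scipy.optimize import linprog

eps = 1e-6
A_ub = [[1, 1], [1, -2]]            # coefficients of "<=" constraints
b_ub = [10, 4 - eps]                # strict "<" tightened by eps
res = linprog(c=[0, 0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)])   # feasibility problem
print("feasible:", res.success, " test data (x, y):", res.x)
```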
PL/SQL is the most common language for ORACLE database applications. It allows the developer to create stored program units (procedures, functions, and packages) to improve software reusability and hide the complexity of a specific operation behind a name. It also acts as an interface between the SQL database and DEVELOPER. Therefore, it is important to test these modules, which consist of procedures and functions. In this paper, a genetic algorithm (GA) is used as a search technique to find the test data required by branch coverage criteria for testing stored PL/SQL program units. The experimental results show that full coverage was not achieved: the test targets in some branches were not reached, and the coverage percentage was 98%. A problem arises when the target branch depends on data retrieved from tables; in this case, the GA is not able to generate test cases for such branches.
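A minimal Python sketch of the search idea (the paper's GA targets PL/SQL units; the branch predicate here is a hypothetical example), with fitness given by the usual branch-distance function:

```python
# Toy GA for branch-coverage test data generation: evolve an integer input
# x so the branch "x * x == 625" is taken; fitness is the branch distance.
import random

def branch_distance(x):
    return abs(x * x - 625)

pop = [random.randint(-1000, 1000) for _ in range(50)]
for _ in range(200):
    pop.sort(key=branch_distance)
    if branch_distance(pop[0]) == 0:
        break                                   # branch covered
    elites = pop[:10]                           # selection
    pop = elites + [random.choice(elites) + random.randint(-5, 5)
                    for _ in range(40)]         # mutation
best = min(pop, key=branch_distance)
print("best test datum:", best, "distance:", branch_distance(best))
```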
A variety of faulty radar echoes may cause serious problems in radar data applications, especially radar data assimilation and quantitative precipitation estimation. In this study, the "test pattern" caused by test signals or radar hardware failures in CINRAD (China New Generation Weather Radar) SA and SB operational observations is investigated. To distinguish the test pattern from other types of radar echoes, such as precipitation, clear air, and other non-meteorological echoes, five feature parameters are proposed: the effective reflectivity data percentage (Rz), the velocity RF (range folding) data percentage (RRF), the missing velocity data percentage (RM), the averaged along-azimuth reflectivity fluctuation (RNr,z), and the averaged along-beam reflectivity fluctuation (RNa,z). Based on the fuzzy logic method, a test pattern identification algorithm is developed, and statistical results from all the different kinds of radar echoes indicate the performance of the algorithm. Analysis of two typical cases with heavy precipitation echoes located inside the test pattern is performed. The statistical results show that the test pattern identification algorithm performs well, recognizing the test pattern in most cases. Moreover, the algorithm can effectively remove the test pattern signal while retaining strong precipitation echoes in heavy rainfall events.
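A hedged sketch of the fuzzy aggregation, with membership shapes, breakpoints, and the decision threshold chosen for illustration rather than taken from the paper's calibration:

```python
# Illustrative fuzzy-logic aggregation over the five feature parameters:
# each is mapped to a [0, 1] membership and the mean is thresholded.
def ramp(x, lo, hi):
    """Trapezoidal edge: 0 below lo, 1 above hi, linear in between."""
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def is_test_pattern(Rz, RRF, RM, RNr_z, RNa_z, threshold=0.5):
    memberships = [
        ramp(Rz, 0.4, 0.9),          # high valid-reflectivity coverage
        ramp(RRF, 0.1, 0.5),         # much range-folded velocity
        ramp(RM, 0.2, 0.7),          # much missing velocity
        1 - ramp(RNr_z, 0.1, 0.6),   # low along-azimuth fluctuation
        1 - ramp(RNa_z, 0.1, 0.6),   # low along-beam fluctuation
    ]
    return sum(memberships) / len(memberships) > threshold

print(is_test_pattern(Rz=0.95, RRF=0.6, RM=0.8, RNr_z=0.05, RNa_z=0.04))
```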
Many multi-story or high-rise buildings consisting of a number of identical stories can be treated as periodic spring-mass systems. General expressions for the natural frequencies, mode shapes, and the slopes and curvatures of the mode shapes of a periodic spring-mass system are derived in this paper by utilizing periodic structure theory. The sensitivities of these mode parameters with respect to structural damage, which do not depend on the physical parameters of the original structure, are obtained. Based on the sensitivity analysis of these mode parameters, a two-stage method is proposed to localize and quantify damage in multi-story or high-rise buildings. The slopes and curvatures of mode shapes, which are highly sensitive to local damage, are used to localize the damage. Subsequently, the limited measured natural frequencies, which have better accuracy than the other mode parameters, are used to quantify the extent of damage within the potential damage locations. The experimental results of a 3-story test building demonstrate that single or multiple damages, either slight or severe, can be correctly localized using only the slope or curvature of the mode shape in one of the lower modes, in which the change of natural frequency is the largest, and can be accurately quantified by the limited measured natural frequencies even with noise pollution.
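A sketch of the underlying shear-building idealization: N identical stories as a periodic chain of masses and springs, with mode shapes and their curvatures computed numerically (the paper derives these in closed form from periodic structure theory):

```python
# N identical stories as a chain of masses and springs; scipy solves the
# generalized eigenvalue problem and curvatures come from central
# differences. Story mass and stiffness values are toy assumptions.
import numpy as np
from scipy.linalg import eigh

N, m, k = 8, 1.0, 100.0
M = m * np.eye(N)
K = k * (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))
K[-1, -1] = k                              # free at the top story

w2, phi = eigh(K, M)                       # squared frequencies, mode shapes
freqs = np.sqrt(w2) / (2 * np.pi)
curv = phi[:-2] - 2 * phi[1:-1] + phi[2:]  # curvature of each mode shape
print("natural frequencies (Hz):", np.round(freqs, 3))
```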
A separation method is proposed to design and improve shock absorbers according to the characteristics of each force, and the method is validated by rig tests. The force measured during a rig test is the resultant of the damping force, the rebound force produced by compressed air, and the friction force. The different characteristics of the damping force, air rebound force, and friction force can be exploited to separate each force from the others. A mass-produced air-filled shock absorber is adopted for the validation. A static test is used to obtain the displacement-force curves, and the data are used as the input to the separation calculation. The tests are then carried out again to obtain the force data without the air rebound force. The force without air rebound is compared to the data derived from the former tests using the separation method. The result shows that this method can separate the damping force and the air elastic force.
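One way to realize the separation numerically, assuming a linearized air-spring term and Coulomb friction, is a least-squares decomposition over a measured cycle; the coefficients below are synthetic:

```python
# Least-squares separation sketch (not the paper's exact procedure):
#   F = c*v (damping) + k*x (air rebound, linearized) + f*sign(v) (friction)
import numpy as np

t = np.linspace(0, 2 * np.pi, 400)
x = 0.05 * np.sin(t)                       # displacement, m
v = 0.05 * np.cos(t)                       # velocity, m/s
F = 800 * v + 3000 * x + 20 * np.sign(v)   # synthetic rig force, N

A = np.column_stack([v, x, np.sign(v)])
c, k, f = np.linalg.lstsq(A, F, rcond=None)[0]
print(f"damping c={c:.0f} N*s/m, air k={k:.0f} N/m, friction f={f:.1f} N")
```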
The water vapor monitoring system based on the Beidou satellite is a new detection system in the meteorological department, which increases the volume of received detection data and the pressure on data storage and transmission. Here, we use data compression to relieve this pressure. The compression software for the Beidou satellite water vapor monitoring system is designed in three components: real-time compression software, check compression software, and manual compression software, which respectively complete the compression tasks for real-time receiving, in-time checking, and separate compression, thereby forming a complete compression system. Guided by the design of the manual compression software and developed in C, a compression test on the original received data was conducted. The test result proves that the system can carry out batch automatic compression, and the compression ratio can reach 30%, which meets the goal of saving storage space to a degree.
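The operational software is written in C; a quick Python check of the achievable ratio with a general-purpose DEFLATE compressor is sketched below, using a hypothetical record format (the actual ratio depends on the redundancy of the raw Beidou records):

```python
# Quick compression-ratio check with DEFLATE; the record layout is a
# hypothetical stand-in for the raw monitoring records.
import zlib

record = b"BDGZ,2013-06-01 00:00:00,34.52,108.95,23.4mm;" * 200
packed = zlib.compress(record, 9)
print(f"compressed to {100 * len(packed) / len(record):.1f}% of original")
```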
Testing is an integral part of software development. Current fast-paced system development has rendered traditional testing techniques obsolete. Therefore, automated testing techniques are needed to keep up with the speed of system development. Model-based testing (MBT) is a technique that uses system models to generate and execute test cases automatically. It was identified that the test data generation (TDG) in many existing model-based test case generation (MB-TCG) approaches is still manual. Automatic and effective TDG can further reduce testing cost while detecting more faults. This study proposes an automated TDG approach for MB-TCG using the extended finite state machine (EFSM) model. The proposed approach integrates MBT with combinatorial testing. The information available in an EFSM model and the boundary value analysis strategy are used to automate the domain input classifications, which were done manually in the existing approach. The results showed that the proposed approach detected 6.62 percent more faults than conventional MB-TCG but at the same time generated 43 more tests. The proposed approach effectively detects faults, but further treatment of the generated tests, such as test case prioritization, should be done to increase the effectiveness and efficiency of testing.
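A sketch of the TDG idea with hypothetical input domains: boundary-value analysis supplies representative values per domain, and a combinatorial step turns them into concrete test data (a full cross product here; pairwise tools would shrink the set):

```python
# Boundary-value analysis plus a combinatorial step; the input domains are
# hypothetical examples standing in for EFSM guard-derived domains.
from itertools import product

def boundary_values(lo, hi):
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

domains = {"speed": (0, 120), "load": (1, 10)}
pools = [boundary_values(lo, hi) for lo, hi in domains.values()]
tests = list(product(*pools))
print(len(tests), "test data tuples, e.g.", tests[:3])
```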
We propose a new nonparametric test based on the rank differences between paired samples for testing the equality of the marginal distributions of a bivariate distribution. We also consider a modification of the new nonparametric test based on the test proposed by Baumgartner, Weiß, and Schindler (1998). An extensive numerical power comparison of various parametric and nonparametric tests was conducted under a wide range of bivariate distributions for small sample sizes. The two new nonparametric tests have power comparable to the paired t test for data simulated from bivariate normal distributions, and are generally more powerful than the paired t test and other commonly used nonparametric tests for several important bivariate distributions.
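A hedged sketch of the general construction (the authors' statistic and the BWS-based variant differ in detail): signed ranks of the paired differences, with the null distribution obtained by permuting the exchangeable signs:

```python
# Paired rank test via sign permutation: under H0 (equal margins) the sign
# of each pair's difference is exchangeable, so permuting signs gives the
# null distribution of the summed signed ranks.
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, 15)
y = x + rng.normal(0.3, 1.0, 15)            # paired sample with a shift

d = y - x
ranks = np.argsort(np.argsort(np.abs(d))) + 1
obs = np.sum(np.sign(d) * ranks)
perm = np.array([np.sum(rng.choice([-1, 1], len(d)) * ranks)
                 for _ in range(10000)])
print("two-sided p-value:", np.mean(np.abs(perm) >= abs(obs)))
```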
Big Data is reforming many industrial domains by providing decision support through the analysis of large data volumes. Big Data testing aims to ensure that Big Data systems run smoothly and error-free while maintaining performance and data quality. However, because of the diversity and complexity of the data, testing Big Data is challenging. Though numerous research efforts deal with Big Data testing, a comprehensive review addressing its techniques and challenges is not yet available. Therefore, we have systematically reviewed the evidence on Big Data testing techniques published in the period 2010-2021. This paper discusses the testing of data processing by highlighting the techniques used in every processing phase. Furthermore, we discuss the challenges and future directions. Our findings show that diverse functional, non-functional, and combined (functional and non-functional) testing techniques have been used to solve specific problems related to Big Data. At the same time, most of the testing challenges are faced during the MapReduce validation phase. In addition, combinatorial testing is one of the most applied techniques, in combination with other techniques (i.e., random testing, mutation testing, input space partitioning, and equivalence testing), to find various functional faults through Big Data testing.