Journal Articles
3,676 articles found
Big Data Analytics: Deep Content-Based Prediction with Sampling Perspective
1
Authors: Waleed Albattah, Saleh Albahli 《Computer Systems Science & Engineering》 SCIE EI 2023(4): 531-544, 14 pages
The world of information technology is more than ever being flooded with huge amounts of data, nearly 2.5 quintillion bytes every day. This large stream of data is called big data, and the amount is increasing each day. This research uses a technique called sampling, which selects a representative subset of the data points, manipulates and analyzes this subset to identify patterns and trends in the larger dataset being examined, and finally creates models. Sampling uses a small proportion of the original data for analysis and model training, so that it is relatively faster while maintaining data integrity and achieving accurate results. Two deep neural networks, AlexNet and DenseNet, were used in this research to test two sampling techniques, namely sampling with replacement and reservoir sampling. The dataset used for this research was divided into three classes: acceptable, flagged as easy, and flagged as hard. The base models were trained with the whole dataset, whereas the other models were trained on 50% of the original dataset. There were four combinations of model and sampling technique. The F-measure for the AlexNet base model was 0.807, while that for the DenseNet base model was 0.808. Combination 1 was the AlexNet model with sampling with replacement, achieving an average F-measure of 0.8852. Combination 3 was the AlexNet model with reservoir sampling; it had an average F-measure of 0.8545. Combination 2 was the DenseNet model with sampling with replacement, achieving an average F-measure of 0.8017. Finally, combination 4 was the DenseNet model with reservoir sampling; it had an average F-measure of 0.8111. Overall, we conclude that both models trained on a sampled dataset gave equal or better results compared to the base models, which used the whole dataset.
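The two sampling schemes compared in this abstract are standard and easy to sketch. Below is a minimal, illustrative Python sketch of reservoir sampling (Algorithm R) and sampling with replacement; the function names and the 50% sampling ratio in the usage example are assumptions for illustration, not the authors' implementation.

```python
import random

def reservoir_sample(stream, k, seed=None):
    """Algorithm R: keep a uniform sample of k items from a stream of
    unknown length using O(k) memory (illustrative, not the paper's code)."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)      # fill the reservoir first
        else:
            j = rng.randint(0, i)       # j uniform on [0, i]
            if j < k:
                reservoir[j] = item     # keep item with probability k/(i+1)
    return reservoir

def sample_with_replacement(data, k, seed=None):
    """Draw k items uniformly with replacement (duplicates allowed)."""
    rng = random.Random(seed)
    return [data[rng.randrange(len(data))] for _ in range(k)]

# Example: sample 50% of a dataset, as in the paper's reduced training sets.
data = list(range(1000))
half = len(data) // 2
print(len(reservoir_sample(iter(data), half, seed=0)))       # 500
print(len(sample_with_replacement(data, half, seed=0)))      # 500
```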
Keywords: sampling, big data, deep learning, AlexNet, DenseNet
Effective data sampling strategies and boundary condition constraints of physics-informed neural networks for identifying material properties in solid mechanics
2
Authors: W. WU, M. DANEKER, M.A. JOLLEY, K.T. TURNER, L. LU 《Applied Mathematics and Mechanics (English Edition)》 SCIE EI CSCD 2023(7): 1039-1068, 30 pages
Material identification is critical for understanding the relationship between mechanical properties and the associated mechanical functions. However, material identification is a challenging task, especially when the characteristic of the material is highly nonlinear in nature, as is common in biological tissue. In this work, we identify unknown material properties in continuum solid mechanics via physics-informed neural networks (PINNs). To improve the accuracy and efficiency of PINNs, we develop efficient strategies to nonuniformly sample observational data. We also investigate different approaches to enforce Dirichlet-type boundary conditions (BCs) as soft or hard constraints. Finally, we apply the proposed methods to a diverse set of time-dependent and time-independent solid mechanics examples that span linear elastic and hyperelastic material space. The estimated material parameters achieve relative errors of less than 1%. As such, this work is relevant to diverse applications, including optimizing structural integrity and developing novel materials.
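The abstract mentions nonuniform sampling of observational data for PINN training. One common way to realize this idea, used here purely as an illustration and not necessarily the authors' strategy, is to draw points with probability weighted by the current model misfit so that poorly fitted regions are sampled more densely; all names below are hypothetical.

```python
import numpy as np

def misfit_weighted_sample(candidates, misfit_fn, n_select, power=2.0, seed=0):
    """Nonuniform sampling sketch: select observation points with probability
    proportional to a power of the current misfit |u_pred - u_obs|, so regions
    the network fits poorly are sampled more densely."""
    rng = np.random.default_rng(seed)
    w = np.abs(misfit_fn(candidates)) ** power
    p = w / w.sum()
    idx = rng.choice(len(candidates), size=n_select, replace=False, p=p)
    return candidates[idx]

# Hypothetical usage: 200 points from a 1-D candidate grid, weighted by misfit.
xs = np.linspace(0.0, 1.0, 5000).reshape(-1, 1)
fake_misfit = lambda x: 0.01 + np.sin(4 * np.pi * x[:, 0]) ** 2  # stand-in for |u_pred - u_obs|
obs_points = misfit_weighted_sample(xs, fake_misfit, n_select=200)
print(obs_points.shape)   # (200, 1)
```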
Keywords: solid mechanics, material identification, physics-informed neural network (PINN), data sampling, boundary condition (BC) constraint
Scaling up the DBSCAN Algorithm for Clustering Large Spatial Databases Based on Sampling Technique (cited by 9)
3
Authors: Guan Jihong (1), Zhou Shuigeng (2), Bian Fuling (3), He Yanxiang (1) (1. School of Computer, Wuhan University, Wuhan 430072, China; 2. State Key Laboratory of Software Engineering, Wuhan University, Wuhan 430072, China; 3. College of Remote Sensin) 《Wuhan University Journal of Natural Sciences》 CAS 2001(Z1): 467-473, 7 pages
Clustering, in data mining, is a useful technique for discovering interesting data distributions and patterns in the underlying data, and has many application fields, such as statistical data analysis, pattern recognition, image processing, etc. We combine the sampling technique with the DBSCAN algorithm to cluster large spatial databases, and two sampling-based DBSCAN (SDBSCAN) algorithms are developed. One algorithm introduces the sampling technique inside DBSCAN, and the other uses a sampling procedure outside DBSCAN. Experimental results demonstrate that our algorithms are effective and efficient in clustering large-scale spatial databases.
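As a rough illustration of the "sampling outside DBSCAN" idea, the sketch below clusters a random subset with scikit-learn's DBSCAN and then assigns each remaining point to the cluster of its nearest core sample; the sampling fraction, parameters, and assignment rule are assumptions, not the paper's SDBSCAN algorithms.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

def sampled_dbscan(X, sample_frac=0.2, eps=0.5, min_samples=5, seed=0):
    """Cluster a random subset with DBSCAN, then label the remaining points by
    their nearest core sample (a rough stand-in for sampling outside DBSCAN)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=max(1, int(sample_frac * len(X))), replace=False)
    db = DBSCAN(eps=eps, min_samples=min_samples).fit(X[idx])
    labels = np.full(len(X), -1)            # -1 = noise / unassigned
    labels[idx] = db.labels_
    core = idx[db.core_sample_indices_]     # indices (into X) of core samples
    rest = np.setdiff1d(np.arange(len(X)), idx)
    if len(core) > 0 and len(rest) > 0:
        nn = NearestNeighbors(n_neighbors=1).fit(X[core])
        dist, nearest = nn.kneighbors(X[rest])
        within = dist.ravel() <= eps        # only points within eps of a core sample
        labels[rest[within]] = labels[core[nearest.ravel()[within]]]
    return labels

# Example on synthetic 2-D data with two well-separated clusters.
X = np.vstack([np.random.default_rng(1).normal(c, 0.3, size=(300, 2)) for c in (0, 5)])
print(np.unique(sampled_dbscan(X, sample_frac=0.3, eps=0.6), return_counts=True))
```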
Keywords: spatial databases, data mining, clustering, sampling, DBSCAN algorithm
Over-sampling algorithm for imbalanced data classification (cited by 6)
4
Authors: XU Xiaolong, CHEN Wen, SUN Yanfei 《Journal of Systems Engineering and Electronics》 SCIE EI CSCD 2019(6): 1182-1191, 10 pages
For imbalanced datasets, the focus of classification is to identify samples of the minority class. The performance of current data mining algorithms is not good enough for processing imbalanced datasets. The synthetic minority over-sampling technique (SMOTE) is specifically designed for learning from imbalanced datasets, generating synthetic minority class examples by interpolating between nearby minority class examples. However, SMOTE encounters the over-generalization problem. The density-based spatial clustering of applications with noise (DBSCAN) is not rigorous when dealing with the samples near the borderline. We optimize the DBSCAN algorithm for this problem to make clustering more reasonable. This paper integrates the optimized DBSCAN and SMOTE, and proposes a density-based synthetic minority over-sampling technique (DSMOTE). First, the optimized DBSCAN is used to divide the samples of the minority class into three groups, including core samples, borderline samples and noise samples, and then the noise samples of the minority class are removed so that more effective samples can be synthesized. In order to make full use of the information in core samples and borderline samples, different strategies are used to over-sample core samples and borderline samples. Experiments show that DSMOTE can achieve better results compared with SMOTE and Borderline-SMOTE in terms of precision, recall and F-value.
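A minimal sketch of the core idea (DBSCAN partitions the minority class, noise is dropped, and new samples are interpolated SMOTE-style) is shown below; it treats core and borderline samples uniformly rather than with the paper's separate strategies, and all parameter values are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

def dsmote_like(X_min, n_new, eps=0.5, min_samples=5, k=5, seed=0):
    """DSMOTE-flavoured over-sampling sketch: DBSCAN splits the minority class
    into core/borderline/noise; noise is discarded and synthetic samples are
    interpolated between remaining minority points and their neighbours."""
    rng = np.random.default_rng(seed)
    db = DBSCAN(eps=eps, min_samples=min_samples).fit(X_min)
    keep = X_min[db.labels_ != -1]                 # core + borderline, noise removed
    nn = NearestNeighbors(n_neighbors=min(k + 1, len(keep))).fit(keep)
    _, neigh = nn.kneighbors(keep)                 # first neighbour is the point itself
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(keep))
        j = neigh[i, rng.integers(1, neigh.shape[1])]
        gap = rng.random()                         # interpolation factor in (0, 1)
        synthetic.append(keep[i] + gap * (keep[j] - keep[i]))
    return np.vstack(synthetic)

# Example: synthesize 100 extra minority samples from a small 2-D minority set.
X_min = np.random.default_rng(2).normal(0.0, 0.5, size=(60, 2))
print(dsmote_like(X_min, n_new=100, eps=0.4).shape)   # (100, 2)
```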
Keywords: imbalanced data, density-based spatial clustering of applications with noise (DBSCAN), synthetic minority over-sampling technique (SMOTE), over-sampling
Brittleness index predictions from Lower Barnett Shale well-log data applying an optimized data matching algorithm at various sampling densities (cited by 1)
5
Author: David A. Wood 《Geoscience Frontiers》 SCIE CAS CSCD 2021(6): 444-457, 14 pages
The capability of accurately predicting mineralogical brittleness index (BI) from basic suites of well logs is desirable, as it provides a useful indicator of the fracability of tight formations. Measuring mineralogical components in rocks is expensive and time consuming. However, the basic well log curves are not well correlated with BI, so correlation-based, machine-learning methods are not able to derive highly accurate BI predictions from such data. A correlation-free, optimized data-matching algorithm is configured to predict BI on a supervised basis from well log and core data available from two published wells in the Lower Barnett Shale Formation (Texas). This transparent open box (TOB) algorithm matches data records by calculating the sum of squared errors between their variables and selecting the best matches as those with the minimum squared errors. It then applies optimizers to adjust the weights applied to individual variable errors to minimize the root mean square error (RMSE) between calculated and predicted BI. The prediction accuracy achieved by TOB using just five well logs (Gr, ρb, Ns, Rs, Dt) to predict BI depends on the density of data records sampled. At a sampling density of about one sample per 0.5 ft, BI is predicted with RMSE ~0.056 and R^(2) ~0.790. At a sampling density of about one sample per 0.1 ft, BI is predicted with RMSE ~0.008 and R^(2) ~0.995. Adding a stratigraphic height index as an additional (sixth) input variable improves BI prediction accuracy to RMSE ~0.003 and R^(2) ~0.999 for the two wells, with only 1 record in 10,000 yielding a BI prediction error of >±0.1. The model has the potential to be applied on an unsupervised basis to predict BI from basic well log data in surrounding wells lacking mineralogical measurements but with similar lithofacies and burial histories. The method could also be extended to predict elastic rock properties and seismic attributes from well and seismic data to improve the precision of brittleness index and fracability mapping spatially.
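The data-matching idea described here (weighted squared-error matching of records plus an optimizer over the variable weights) can be sketched as follows; the match count, optimizer choice, and validation split are assumptions for illustration, not Wood's published TOB configuration.

```python
import numpy as np
from scipy.optimize import minimize

def match_predict(X_train, y_train, X_query, weights, n_matches=3):
    """Predict a target by record matching: the weighted sum of squared errors
    between log variables picks the closest training records, whose target
    values (e.g. brittleness index) are averaged."""
    preds = []
    for q in X_query:
        sse = np.sum(weights * (X_train - q) ** 2, axis=1)
        best = np.argsort(sse)[:n_matches]
        preds.append(y_train[best].mean())
    return np.array(preds)

def fit_variable_weights(X_train, y_train, X_val, y_val, n_matches=3):
    """Optimize per-variable weights to minimize RMSE between measured and
    matched predictions (a generic optimizer stands in for the paper's)."""
    def rmse(w):
        p = match_predict(X_train, y_train, X_val, np.abs(w), n_matches)
        return float(np.sqrt(np.mean((p - y_val) ** 2)))
    w0 = np.ones(X_train.shape[1])
    res = minimize(rmse, w0, method="Nelder-Mead")
    return np.abs(res.x)
```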
Keywords: well-log brittleness index estimates, data record sample densities, zoomed-in data interpolation, correlation-free prediction analysis, mineralogical and elastic influences
Characteristics analysis on high density spatial sampling seismic data (cited by 11)
6
Authors: Cai Xiling, Liu Xuewei, Deng Chunyan, Lv Yingme 《Applied Geophysics》 SCIE CSCD 2006(1): 48-54, 7 pages
China's continental sedimentary basins are characterized by complex geological structures and diverse reservoir lithologies, so high-precision exploration methods are needed. High-density spatial sampling is a new technology for increasing the accuracy of seismic exploration. We briefly discuss point-source and point-receiver techniques, analyze the in-situ method of high-density spatial sampling, introduce the symmetric sampling principle presented by Gijs J. O. Vermeer, and discuss high-density spatial sampling technology from the viewpoint of wavefield continuity. We emphasize the analysis of high-density spatial sampling characteristics: high-density first-break data are advantageous for investigating near-surface structure and improve the accuracy of static corrections; the use of dense receiver spacing at short offsets increases the effective coverage at shallow depths and the accuracy of reflection imaging. Coherent noise is not aliased, and noise analysis accuracy and noise suppression are improved as a result. High-density spatial sampling improves wavefield continuity and the accuracy of various mathematical transforms, which benefits wavefield separation. Finally, we point out that the difficult part of high-density spatial sampling is data processing; more research is needed on methods for analyzing and processing the huge volumes of seismic data.
Keywords: high-density spatial sampling, symmetric sampling, static correction, noise suppression, wavefield separation, data processing, seismic exploration
RAD-seq data reveals robust phylogeny and morphological evolutionary history of Rhododendron
7
Authors: Yuanting Shen, Gang Yao, Yunfei Li, Xiaoling Tian, Shiming Li, Nian Wang, Chengjun Zhang, Fei Wang, Yongpeng Ma 《Horticultural Plant Journal》 SCIE CAS CSCD 2024(3): 866-878, 13 pages
Rhododendron is famous for its high ornamental value. However, the genus is taxonomically difficult and the relationships within Rhododendron remain unresolved. In addition, the origin of key morphological characters with high horticultural value needs to be explored. Both problems largely hinder utilization of germplasm resources. Most studies attempted to disentangle the phylogeny of Rhododendron, but only used a few genomic markers and lacked large-scale sampling, resulting in low clade support and contradictory phylogenetic signals. Here, we used restriction-site associated DNA sequencing (RAD-seq) data and morphological traits for 144 species of Rhododendron, representing all subgenera and most sections and subsections of this species-rich genus, to decipher its intricate evolutionary history and reconstruct ancestral states. Our results revealed high resolution at the subgenus and section levels of Rhododendron based on RAD-seq data. Both the optimal phylogenetic tree and the split tree recovered five lineages within Rhododendron. Subg. Therorhodion (clade Ⅰ) formed the basal lineage. Subg. Tsutsusi and Azaleastrum formed clade Ⅱ and had sister relationships. Clade Ⅲ included all scaly rhododendron species. Subg. Pentanthera (clade Ⅳ) formed a sister group to Subg. Hymenanthes (clade Ⅴ). The results of ancestral state reconstruction showed that the Rhododendron ancestor was a deciduous woody plant with terminal inflorescence, ten stamens, leaf blade without scales and broadly funnelform corolla with pink or purple color. This study shows that RAD-seq data, with high clade support in the resulting phylogenetic tree, can resolve the evolutionary history of Rhododendron. It also provides an example of resolving discordant signals in phylogenetic trees and demonstrates the feasibility of applying RAD-seq with large amounts of missing data to decipher intricate evolutionary relationships. Additionally, the reconstructed ancestral states of six important characters provide insights into the innovation of key characters in Rhododendron.
Keywords: Rhododendron, RAD-seq, missing data, quartet sampling (QS), ancestral state reconstruction
Power Analysis and Sample Size Determination for Crossover Trials with Application to Bioequivalence Assessment of Topical Ophthalmic Drugs Using Serial Sampling Pharmacokinetic Data
8
Authors: YU Yong Pei, YAN Xiao Yan, YAO Chen, XIA Jie Lai 《Biomedical and Environmental Sciences》 SCIE CAS CSCD 2019(8): 614-623, 10 pages
Objective: To develop methods for determining a suitable sample size for bioequivalence assessment of generic topical ophthalmic drugs using a crossover design with serial sampling schemes. Methods: The power functions of the Fieller-type confidence interval and the asymptotic confidence interval in crossover designs with serial-sampling data are derived. Simulation studies were conducted to evaluate the derived power functions. Results: Simulation studies show that the two power functions provide precise power estimates when normality assumptions are satisfied and yield conservative estimates of power when data are log-normally distributed. The intra-correlation showed a positive correlation with the power of the bioequivalence test. When the expected ratio of the AUCs was less than or equal to 1, the power of the Fieller-type confidence interval was larger than that of the asymptotic confidence interval. If the expected ratio of the AUCs was larger than 1, the asymptotic confidence interval had greater power. Sample size can be calculated through numerical iteration with the derived power functions. Conclusion: The Fieller-type power function and the asymptotic power function can be used to determine sample sizes of crossover trials for bioequivalence assessment of topical ophthalmic drugs.
Keywords: serial-sampling data, crossover design, topical ophthalmic drug, bioequivalence, sample size
Novel Stability Criteria for Sampled-Data Systems With Variable Sampling Periods (cited by 2)
9
Authors: Hanyong Shao, Jianrong Zhao, Dan Zhang 《IEEE/CAA Journal of Automatica Sinica》 EI CSCD 2020(1): 257-262, 6 pages
This paper is concerned with a novel Lyapunov-like functional approach to the stability of sampled-data systems with variable sampling periods. The Lyapunov-like functional has four striking characteristics compared to usual ones. First, it is time-dependent. Second, it may be discontinuous. Third, not every term of it is required to be positive definite. Fourth, the Lyapunov functional includes not only the state and the sampled state but also the integral of the state. By using a recently reported inequality to estimate the derivative of this Lyapunov functional, a sampling-interval-dependent stability criterion with reduced conservatism is obtained. The stability criterion is further extended to sampled-data systems with polytopic uncertainties. Finally, three examples are given to illustrate the reduced conservatism of the stability criteria.
Keywords: Lyapunov functional, sampled-data systems, sampling-interval-dependent stability
Minimum Data Sampling Method in the Inverse Scattering Problem
10
Authors: Yu Wenhua (Res. Inst. of EM Field and Microwave Tech., Southwest Jiaotong University, Chengdu 610031, China), Peng Zhongqiu (Beijing Remote Sensing and Information Institute, Beijing 100011, China), Ren Lang (Res. Inst. of EM Field and Microwave Tech., Southwest J 《Journal of Modern Transportation》 1994(2): 114-118, 5 pages
The Fourier transform is a basis of the analysis. This paper presents a method of using minimum sampled data to determine the profile of the inverted object in inverse scattering.
Keywords: inverse scattering, nonuniqueness, sampling data
Major Data of China's 1% National Population Sampling Survey, 1995
11
《China Population Today》 1996(4): 7-8, 2 pages
Keywords: major data of China's 1% National Population Sampling Survey, 1995
Consensus of heterogeneous multi-agent systems based on sampled data with a small sampling delay
12
Authors: 王娜, 吴治海, 彭力 《Chinese Physics B》 SCIE EI CAS CSCD 2014(10): 617-625, 9 pages
In this paper, consensus problems of heterogeneous multi-agent systems based on sampled data with a small sampling delay are considered. First, a consensus protocol based on sampled data with a small sampling delay for heterogeneous multi-agent systems is proposed. Then, algebraic graph theory, the matrix method, the stability theory of linear systems, and some other techniques are employed to derive the necessary and sufficient conditions guaranteeing that heterogeneous multi-agent systems asymptotically achieve the stationary consensus. Finally, simulations are performed to demonstrate the correctness of the theoretical results.
Keywords: heterogeneous multi-agent systems, consensus, sampled data, small sampling delay
Discrete Data Clustering Based on Dependency Structure and Gibbs Sampling
13
Authors: 王双成, 俞时权, 程新章 《计算机工程》 (Computer Engineering) CAS CSCD PKU Core 2006(9): 28-30, 3 pages
A new clustering method for discrete data is established. The method combines the dependency structure among variables with Gibbs sampling to cluster discrete data; it can significantly improve sampling efficiency and avoids the problems brought about by using the EM algorithm for clustering. Experimental results show that the method can cluster discrete data effectively.
Keywords: clustering, discrete data, dependency structure, Gibbs sampling, MDL criterion
Fuzzy data envelopment analysis approach based on sample decision making units (cited by 11)
14
Authors: Muren, Zhanxin Ma, Wei Cui 《Journal of Systems Engineering and Electronics》 SCIE EI CSCD 2012(3): 399-407, 9 pages
The conventional data envelopment analysis (DEA) measures the relative efficiencies of a set of decision making units with exact values of inputs and outputs. In real-world problems, however, inputs and outputs typically have some levels of fuzziness. To analyze a decision making unit (DMU) with fuzzy input/output data, previous studies provided the fuzzy DEA model and proposed an associated evaluating approach. Nonetheless, numerous deficiencies must still be improved, including the α-cut approaches, types of fuzzy numbers, and ranking techniques. Moreover, a fuzzy sample DMU still cannot be evaluated for the fuzzy DEA model. Therefore, this paper proposes a fuzzy DEA model based on sample decision making units (FSDEA). Five evaluation approaches and the related algorithm and ranking methods are provided to test the fuzzy sample DMU of the FSDEA model. A numerical experiment is used to demonstrate and compare the results with those obtained using alternative approaches.
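For readers unfamiliar with DEA, the crisp input-oriented CCR efficiency score that the fuzzy model generalizes can be computed with a small linear program. The sketch below is the classic crisp formulation, not the paper's fuzzy sample-DMU model, and the example data are made up.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR DEA efficiency of DMU k with crisp data.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Returns theta in (0, 1]."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                     # minimize theta
    A_in = np.hstack([-X[k].reshape(-1, 1), X.T])  # sum_j lam_j * x_ij <= theta * x_ik
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])    # sum_j lam_j * y_rj >= y_rk
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[k]])
    bounds = [(0, None)] * (n + 1)                 # theta >= 0, lambda >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

# Made-up example: 4 DMUs, 2 inputs, 1 output.
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
print([round(ccr_efficiency(X, Y, k), 3) for k in range(4)])
```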
Keywords: fuzzy mathematical programming, sample decision making unit, fuzzy data envelopment analysis, efficiency, α-cut
Improved Prediction and Reduction of Sampling Density for Soil Salinity by Different Geostatistical Methods (cited by 7)
15
Authors: LI Yan, SHI Zhou, WU Ci-fang, LI Hong-yi, LI Feng 《Agricultural Sciences in China》 CAS CSCD 2007(7): 832-841, 10 pages
The spatial estimation of soil properties can be improved, and sampling intensities decreased, by incorporating auxiliary data. In this study, ordinary kriging and two interpolation methods that incorporate auxiliary variables, cokriging and regression-kriging, were evaluated; using the salinity data from the first two sampling stages as auxiliary variables, both methods improved the interpolation of soil salinity in coastal saline land. The prediction accuracy of the three methods was assessed under different sampling densities of the target variable by comparison with a separate group of 80 validation sample points, from which the root-mean-square error (RMSE) and correlation coefficient (r) between the predicted and measured values were calculated. The results showed that, with the help of auxiliary data, whatever the sample size of the target variable, cokriging and regression-kriging performed better than ordinary kriging. Moreover, regression-kriging produced on average more accurate predictions than cokriging. Compared with the kriging results, cokriging improved the estimations by reducing RMSE by 23.3% to 29% and increasing r by 16.6% to 25.5%, while regression-kriging improved the estimations by reducing RMSE by 25% to 41.5% and increasing r by 16.8% to 27.2%. Therefore, regression-kriging shows promise for improved prediction of soil salinity and for considerably reducing soil sampling intensity while maintaining high prediction accuracy. Moreover, in regression-kriging the regression model can have any form, such as generalized linear models, non-linear models or tree-based models, which provides the possibility of including more ancillary variables.
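Regression-kriging, as used in this study, decomposes the prediction into a regression trend on auxiliary variables plus kriged residuals. The sketch below illustrates that decomposition; it assumes the external PyKrige package for ordinary kriging and a simple linear trend model, and is not the authors' workflow.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from pykrige.ok import OrdinaryKriging   # assumes the PyKrige package is installed

def regression_kriging(coords, aux, target, coords_new, aux_new):
    """Regression-kriging sketch: regress the target (e.g. soil salinity) on
    auxiliary variables, krige the regression residuals, and add the kriged
    residuals back to the regression trend at the new locations."""
    trend = LinearRegression().fit(aux, target)
    resid = target - trend.predict(aux)
    ok = OrdinaryKriging(coords[:, 0], coords[:, 1], resid,
                         variogram_model="spherical")
    kriged_resid, _ = ok.execute("points", coords_new[:, 0], coords_new[:, 1])
    return trend.predict(aux_new) + np.asarray(kriged_resid)
```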
Keywords: auxiliary data, prediction precision, sampling density, soil salinity, kriging
Data processing of small samples based on grey distance information approach (cited by 13)
16
Authors: Ke Hongfa, Chen Yongguang & Liu Yi (1. Coll. of Electronic Science and Engineering, National Univ. of Defense Technology, Changsha 410073, P. R. China; 2. Unit 63880, Luoyang 471003, P. R. China) 《Journal of Systems Engineering and Electronics》 SCIE EI CSCD 2007(2): 281-289, 9 pages
Data processing of small samples is an important and valuable research problem in electronic equipment testing. Because it is difficult and complex to determine the probability distribution of small samples, it is difficult to use traditional probability theory to process the samples and assess the degree of uncertainty. Using grey relational theory and norm theory, the grey distance information approach, which is based on the grey distance information quantity of a sample and the average grey distance information quantity of the samples, is proposed in this article. The definitions of the grey distance information quantity of a sample and the average grey distance information quantity of the samples, with their characteristics and algorithms, are introduced. The correlative problems, including the algorithm of the estimated value, the standard deviation, and the acceptance and rejection criteria of the samples and estimated results, are also proposed. Moreover, the information whitening ratio is introduced to select the weight algorithm and to compare the different samples. Several examples are given to demonstrate the application of the proposed approach. The examples show that the proposed approach, which has no demand for the probability distribution of small samples, is feasible and effective.
Keywords: data processing, grey theory, norm theory, small samples, uncertainty assessment, grey distance measure, information whitening ratio
Sampled-data Consensus of Multi-agent Systems with General Linear Dynamics Based on a Continuous-time Model (cited by 15)
17
Authors: ZHANG Xie-Yan, ZHANG Jing 《自动化学报》 (Acta Automatica Sinica) EI CSCD PKU Core 2014(11): 2549-2555, 7 pages
Keywords: multi-agent systems, sampled data, continuous time, linear, Lyapunov function, LMI method, sampling interval, general
A New Automated Method and Sample Data Flow for Analysis of Volatile Nitrosamines in Human Urine (cited by 1)
18
Authors: James A. Hodgson, Tiffany H. Seyler, Ernest McGahee, Stephen Arnstein, Lanqing Wang 《American Journal of Analytical Chemistry》 2016(2): 165-178, 14 pages
Volatile nitrosamines (VNAs) are a group of compounds classified as probable (group 2A) and possible (group 2B) carcinogens in humans. Along with certain foods and contaminated drinking water, VNAs are detected at high levels in tobacco products and in both mainstream and side-stream smoke. Our laboratory monitors six urinary VNAs—N-nitrosodimethylamine (NDMA), N-nitrosomethylethylamine (NMEA), N-nitrosodiethylamine (NDEA), N-nitrosopiperidine (NPIP), N-nitrosopyrrolidine (NPYR), and N-nitrosomorpholine (NMOR)—using isotope dilution GC-MS/MS (QQQ) for large population studies such as the National Health and Nutrition Examination Survey (NHANES). In this paper, we report for the first time a new automated sample preparation method to more efficiently quantitate these VNAs. Automation is done using Hamilton STAR™ and Caliper Staccato™ workstations. This new automated method reduces sample preparation time from 4 hours to 2.5 hours while maintaining precision (inter-run CV < 10%) and accuracy (85% - 111%). More importantly, this method increases sample throughput while maintaining a low limit of detection (<10 pg/mL) for all analytes. A streamlined sample data flow was created in parallel to the automated method, in which samples can be tracked from receiving to final LIMS output with minimal human intervention, further minimizing human error in the sample preparation process. This new automated method and the sample data flow are currently applied in bio-monitoring of VNAs in the US non-institutionalized population in the NHANES 2013-2014 cycle.
Keywords: volatile nitrosamines, automation, sample data flow, gas chromatography, tandem mass spectrometry
A New Approach to Robust Stability Analysis of Sampled-data Control Systems (cited by 6)
19
Authors: WANG Guang-Xiong, LIU Yan-Wen, HE Zhen, WANG Yong-Li 《自动化学报》 (Acta Automatica Sinica) EI CSCD PKU Core 2005(4): 510-515, 6 pages
The lifting technique is now the most popular tool for dealing with sampled-data control systems. However, for the robust stability problem the system norm is not preserved by the lifting as expected, and the result is generally conservative under the small gain condition. The reason for the norm difference caused by the lifting is that the state transition operator in the lifted system is zero in this case. A new approach to robust stability analysis is proposed: an equivalent discrete-time uncertainty is used to replace the continuous-time uncertainty. The general discretized method can then be used for the robust stability problem, and it is not conservative. Examples are given in the paper.
Keywords: sampled-data systems, stability, gain theory, automatic control
Actuator Fault Detection for Sampled-Data Systems in H_∞ Setting (cited by 4)
20
Authors: 杨晓军, 翁正新, 田作华 《Journal of Shanghai Jiaotong University (Science)》 EI 2005(2): 131-134, 4 pages
Actuator fault detection for sampled-data systems was investigated from the viewpoint of jump systems. With the aid of prior frequency information on the fault, such a problem is converted to an augmented H_∞ filtering problem. A simple state-space approach is then proposed to deal with the sampled-data actuator fault detection problem. Compared with existing approaches, the proposed approach allows parameters of the sampled-data system to be time-varying, with consideration of measurement noise.
Keywords: sampled data, fault detection, actuator, Riccati equation