The theory and method of machining parameter optimization for high-speed machining are studied. Machining data collected from workshops, laboratories, and the literature are analyzed, and an optimization method based on the genetic algorithm (GA) is investigated. Its calculation speed is faster than that of traditional optimization methods, making it suitable for machining parameter optimization in automatic manufacturing systems. Based on these theoretical studies, a system for machining parameter management and optimization is developed. The system can improve the productivity of high-speed machining centers.
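The abstract does not describe the GA's encoding, objective, or constraints; the sketch below only illustrates the general technique, with an assumed two-variable parameter vector (cutting speed, feed rate) and a toy fitness function and tool-life penalty that are not the authors' model.

```python
import random

# Illustrative bounds (assumptions): cutting speed v [m/min], feed f [mm/rev]
BOUNDS = {"v": (100.0, 600.0), "f": (0.05, 0.5)}

def fitness(ind):
    """Toy objective: maximize a material-removal proxy v*f,
    with a quadratic penalty once v exceeds a nominal tool-life limit."""
    v, f = ind
    penalty = max(0.0, v - 450.0) ** 2 * 0.01
    return v * f - penalty

def random_ind():
    return [random.uniform(*BOUNDS["v"]), random.uniform(*BOUNDS["f"])]

def crossover(a, b):
    """Blend crossover: a random convex combination of two parents."""
    w = random.random()
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

def mutate(ind, rate=0.1):
    """Gaussian mutation, clamped back to the variable bounds."""
    bounds = [BOUNDS["v"], BOUNDS["f"]]
    return [
        min(max(x + random.gauss(0, 0.05 * (hi - lo)), lo), hi)
        if random.random() < rate else x
        for x, (lo, hi) in zip(ind, bounds)
    ]

def ga(pop_size=40, generations=60):
    pop = [random_ind() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 5]                 # keep the best 20%
        children = [
            mutate(crossover(*random.sample(elite, 2)))
            for _ in range(pop_size - len(elite))
        ]
        pop = elite + children
    return max(pop, key=fitness)

print(ga())  # best (v, f) found under this toy objective
```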
The SyncML protocol does not consider security during data exchange. To solve this problem, SyncML is enhanced with a secure data synchronization exchange service application program interface (SDSXS-API) to ensure reliability, integrity, and security in data transmission and exchange. The design and implementation of the SDSXS-API are also given. The proposed APIs can be conveniently used as a uniform exchange interface by any related application program, and their effectiveness is verified in a prototype mobile database system.
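The abstract does not expose the SDSXS-API's actual signatures, so the following is only a hypothetical sketch of what a uniform secure-exchange interface of this kind might look like; every name and method here is an assumption, not the paper's API.

```python
from abc import ABC, abstractmethod

class SecureSyncExchange(ABC):
    """Hypothetical shape of a secure sync-exchange API layered on SyncML.
    Method names and signatures are illustrative, not the paper's SDSXS-API."""

    @abstractmethod
    def open_session(self, peer_uri: str, credentials: bytes) -> str:
        """Authenticate both ends and return a session id."""

    @abstractmethod
    def send(self, session_id: str, payload: bytes) -> None:
        """Encrypt, integrity-protect, and transmit one sync package."""

    @abstractmethod
    def receive(self, session_id: str) -> bytes:
        """Receive one package, verify its integrity, and return the plaintext."""

    @abstractmethod
    def close_session(self, session_id: str) -> None:
        """Tear down keys and session state."""
```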
An attempt has been made to develop a distributed software infrastructure model for onboard data fusion system simulation, which can also be applied to netted radar systems, onboard distributed detection systems, and advanced C3I systems. Two architectures are provided and verified: one is based on the pure TCP/IP protocol and the client/server (C/S) model and is implemented with Winsock; the other is based on CORBA (common object request broker architecture). Both models improve the reliability, flexibility, and scalability of the data fusion simulation system. Their study is a valuable exploration of incorporating distributed computation concepts into radar system simulation techniques.
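As a minimal illustration of the TCP/IP client/server pattern that the first architecture builds on (a generic sketch, not the paper's Winsock implementation):

```python
import socket
import threading
import time

def fusion_server(host="127.0.0.1", port=9009):
    """Toy C/S skeleton: the fusion node accepts one sensor connection
    and reads a single track message."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen()
    conn, _ = srv.accept()
    with conn:
        data = conn.recv(1024)
        print("fused input:", data.decode())
    srv.close()

def sensor_client(host="127.0.0.1", port=9009):
    """A sensor node pushes one illustrative track report to the server."""
    with socket.create_connection((host, port)) as c:
        c.sendall(b"track:id=7,x=1.2,y=3.4")

t = threading.Thread(target=fusion_server)
t.start()
time.sleep(0.2)  # give the server a moment to start listening
sensor_client()
t.join()
```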
This work focuses on object-oriented data modelling in computer-aided design (CAD) databases. Starting with a discussion of the data modelling requirements of CAD applications, appropriate data modelling features are introduced. A feasible approach to selecting the "best" data model for an application is to analyze the data that has to be stored in the database: a data model is appropriate for a given task if the information of the application environment can be easily mapped to it. Thus, the data involved are analyzed, and an object-oriented data model appropriate for CAD applications is derived. Based on a review of object-oriented techniques applied in CAD, object-oriented data modelling in CAD is addressed in detail. Finally, 3D geometrical data models and their implementation using the object-oriented method are presented.
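As a minimal sketch of the idea that 3D CAD entities map naturally to object classes, here is an assumed boundary-representation-style hierarchy; the class names and structure are illustrative, not the paper's model.

```python
from dataclasses import dataclass, field

@dataclass
class Point3D:
    x: float
    y: float
    z: float

@dataclass
class Edge:
    """An edge references its bounding vertices rather than copying them."""
    start: Point3D
    end: Point3D

@dataclass
class Face:
    edges: list = field(default_factory=list)

@dataclass
class Solid:
    """A solid aggregates faces; identity and behavior live with the object,
    which is the mapping convenience the object-oriented model provides."""
    name: str
    faces: list = field(default_factory=list)

    def edge_count(self) -> int:
        return sum(len(f.edges) for f in self.faces)

p0, p1 = Point3D(0, 0, 0), Point3D(1, 0, 0)
box = Solid("box", faces=[Face(edges=[Edge(p0, p1)])])
print(box.edge_count())  # 1
```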
In order to study the temporal variations of correlations between two time series, a running correlation coefficient (RCC) can be used. An RCC is calculated for a given time window, and the window is then moved sequentially through time. The current calculation method for RCCs is based on the general definition of the Pearson product-moment correlation coefficient, calculated with the data within the time window; we call this the local running correlation coefficient (LRCC). The LRCC is calculated from the two anomalies corresponding to the two local means, but the local means themselves also vary. We show that the LRCC reflects only the correlation between the two anomalies within the time window and fails to exhibit the contributions of the two varying means. To address this problem, two fixed means obtained from all available data are adopted to calculate an RCC, which we call the synthetic running correlation coefficient (SRCC). When the anomaly variations are dominant, the two RCCs are similar; when the variations of the means are dominant, the difference between them becomes obvious. The SRCC reflects the correlations of both the anomaly variations and the variations of the means, so SRCCs from different time points are intercomparable. A criterion for the superiority of an RCC algorithm is that the average value of the RCC should be close to the global correlation coefficient calculated using all data. The SRCC always meets this criterion, while the LRCC sometimes fails. Therefore, the SRCC is better than the LRCC for running correlations, and we suggest using the SRCC to calculate RCCs.
Funding: supported by the Key Program of the National Natural Science Foundation of China (No. 41330960) and the Global Change Research Program of China (No. 2015CB953900).
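A minimal sketch of the two coefficients as the abstract defines them: the LRCC is the ordinary Pearson coefficient inside each window, while the SRCC uses the same window sums but takes anomalies about the fixed global means of the full records. The demo series and variable names are illustrative.

```python
import numpy as np

def lrcc(x, y, w):
    """Local RCC: Pearson r within each length-w window (local means)."""
    return np.array([
        np.corrcoef(x[i:i + w], y[i:i + w])[0, 1]
        for i in range(len(x) - w + 1)
    ])

def srcc(x, y, w):
    """Synthetic RCC: same windows, but anomalies are taken about the
    fixed global means of the full records, as the abstract describes."""
    mx, my = x.mean(), y.mean()
    out = []
    for i in range(len(x) - w + 1):
        dx, dy = x[i:i + w] - mx, y[i:i + w] - my
        out.append((dx * dy).sum() / np.sqrt((dx ** 2).sum() * (dy ** 2).sum()))
    return np.array(out)

# Two series whose means drift together: the SRCC retains the shared trend,
# while the LRCC sees only the window-local anomalies.
rng = np.random.default_rng(0)
t = np.arange(200)
x = 0.05 * t + rng.normal(size=200)
y = 0.05 * t + rng.normal(size=200)
# The mean SRCC stays close to the global coefficient; the mean LRCC need not.
print(lrcc(x, y, 31).mean(), srcc(x, y, 31).mean(), np.corrcoef(x, y)[0, 1])
```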
Aggregate stability is a very important predictor of soil structure and strength, which influence soil erodibility. Several aggregate stability indices were selected for estimating the interrill erodibility of four soil types with contrasting properties from temperate and subtropical regions of China. This study was conducted to investigate how closely the soil interrill erodibility factor in the Water Erosion Prediction Project (WEPP) model relates to soil aggregate stability. The mass fractal dimension (FD), geometric mean diameter (GMD), mean weight diameter (MWD), and aggregate stability index (ASI) of soil aggregates were calculated. A rainfall simulator with a drainable flume (3.0 m long × 1.0 m wide × 0.5 m deep) was used at four slope gradients (5°, 10°, 15°, and 20°) and four rainfall intensities (0.6, 1.1, 1.7, and 2.5 mm/min). Results indicated that the interrill erodibility (Ki) values were significantly correlated with the indices of ASI, MWD, GMD, and FD computed from the aggregate wet-sieve data. Ki had a strong positive correlation with FD and a strong negative correlation with ASI, GMD, and MWD: soils with higher aggregate stability and a lower fractal dimension have smaller Ki values. Stable soils were characterized by a high percentage of large aggregates, and erodible soils by a high percentage of smaller aggregates. The correlation coefficients of Ki with ASI and GMD were greater than those with FD and MWD, implying that ASI and GMD may be better parameters for empirically predicting the soil Ki factor. ASI and GMD are more reasonable for estimating interrill soil erodibility than the Ki calculation in the original WEPP model equation. The results validate soil aggregation characterization as an appropriate indicator of soil susceptibility to erosion across contrasting soil types in China.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 41271303, 40901135), the National Key Technology R&D Program (Grant Nos. 2012BAK10B04, 2008BAD98B02), the Non-profit Industry Financial Program of MWR (Grant No. 201301058), the Changjiang River Scientific Research Institute of Sciences Innovation Team Project (Grant No. CKSF2012052/TB), and a central public welfare scientific research project (Grant No. CKSF2013013/TB).
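MWD and GMD have standard textbook definitions from wet-sieve mass fractions; a minimal sketch under hypothetical sieve data follows (the paper's exact ASI and FD formulas are not given in the abstract, so they are omitted).

```python
import math

# Hypothetical wet-sieve data: mean class diameter [mm] -> mass fraction
sieve = {3.5: 0.20, 1.5: 0.30, 0.75: 0.25, 0.375: 0.15, 0.1: 0.10}

def mwd(fractions):
    """Mean weight diameter: class mean diameters weighted by mass fraction."""
    return sum(d * w for d, w in fractions.items())

def gmd(fractions):
    """Geometric mean diameter: exp of the mass-weighted mean of ln(diameter)."""
    total = sum(fractions.values())
    return math.exp(sum(w * math.log(d) for d, w in fractions.items()) / total)

print(f"MWD = {mwd(sieve):.3f} mm, GMD = {gmd(sieve):.3f} mm")
```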
Cloud computing is very useful for big data owners who do not want to manage IT infrastructure and big data technical details. However, it is hard for a big data owner to trust a multi-layer outsourced big data system in the cloud and to verify which outsourced service causes a problem. Similarly, the cloud service provider cannot simply trust the data computation applications. Moreover, the verification data itself may leak sensitive information of both the cloud service provider and the data owner. We propose a new three-level definition of verification, a threat model, and corresponding trusted policies based on the different roles in an outsourced big data system in the cloud. We also provide two policy enforcement methods for building a trusted data computation environment by measuring both the MapReduce application and its behaviors, based on trusted computing and aspect-oriented programming. To prevent sensitive information leakage during verification, we provide a privacy-preserving verification method. Finally, we implement TPTVer, a Trusted third-Party based Trusted Verifier, as a proof-of-concept system. Our evaluation and analysis show that TPTVer can provide trusted verification for multi-layered outsourced big data systems in the cloud with low overhead.
Funding: partially supported by grants from the China 863 High-tech Program (Grant No. 2015AA016002), the Specialized Research Fund for the Doctoral Program of Higher Education (Grant No. 20131103120001), the National Key Research and Development Program of China (Grant No. 2016YFB0800204), the National Science Foundation of China (No. 61502017), and the Scientific Research Common Program of Beijing Municipal Commission of Education (KM201710005024).
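A generic illustration of the trusted-computing measurement step such systems build on: hash an application binary and admit it only if the digest matches an approved list. This is a common pattern, not TPTVer's actual protocol, and the whitelist here is hypothetical.

```python
import hashlib

def measure(path: str) -> str:
    """Integrity measurement: SHA-256 digest of an application binary,
    taken before it is allowed to run."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, approved: dict) -> bool:
    """Admit the application only if its measurement matches the whitelist."""
    return approved.get(path) == measure(path)

# Hypothetical usage: the whitelist digest would come from a trusted party.
approved = {"wordcount.jar": measure("wordcount.jar")}
print(verify("wordcount.jar", approved))  # True until the binary changes
```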
Emodin is an anthraquinone with pharmaceutical activities. The solubilities of emodin in ethanol, 1-octanol, and ethanol + 1-octanol mixtures at different temperatures were measured using an analytical method. The solubility of emodin in these solvents increased with increasing temperature, and as the temperature deviated from room temperature, its dependence on temperature became much more pronounced. The solubility of emodin in ethanol was more sensitive to temperature than that in 1-octanol. The solubility data of emodin in each solvent at different temperatures were correlated with an empirical equation, and the calculated results agreed well with the experimental data.
Funding: supported by the National Natural Science Foundation of China (No. 20406008).
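The abstract does not name the empirical equation; one common choice for correlating solubility with temperature is a van't Hoff-type form, ln x = A + B/T. A minimal least-squares fit under that assumption, with hypothetical data:

```python
import numpy as np

# Hypothetical mole-fraction solubilities x at temperatures T [K]
T = np.array([288.15, 298.15, 308.15, 318.15, 328.15])
x = np.array([1.1e-5, 1.8e-5, 3.1e-5, 5.6e-5, 9.8e-5])

# Assumed correlation (not necessarily the paper's): ln x = A + B / T
design = np.column_stack([np.ones_like(T), 1.0 / T])
(A, B), *_ = np.linalg.lstsq(design, np.log(x), rcond=None)

x_calc = np.exp(A + B / T)
print(f"A = {A:.2f}, B = {B:.0f} K, "
      f"max rel. dev. = {np.max(np.abs(x_calc - x) / x):.2%}")
```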
Information analysis of high-dimensional data was carried out through the application of a similarity measure. High-dimensional data were treated as an atypical structure. Overlapped and non-overlapped data were introduced, and the similarity measure analysis was illustrated and compared with a conventional similarity measure. Overlapped data could be compared using the conventional similarity measure, while the analysis of non-overlapped data provided the clue to solving the similarity of high-dimensional data. The similarity measure for high-dimensional data was designed with consideration of neighborhood information, and conservative and strict solutions were proposed. The proposed similarity measure was applied to characterizing financial fraud across multi-dimensional datasets. In an illustrative example, financial fraud similarity with respect to age, gender, qualification, and job was presented, and high-dimensional personal data were evaluated with the proposed measure for their similarity to financial fraud. The calculation results show that actual fraud has a rather high similarity measure compared with the average, ranging from a minimum of 0.0609 to a maximum of 0.1667.
Funding: Project (RDF 11-02-03) supported by the Research Development Fund of XJTLU, China.
Recently a new clustering algorithm called affinity propagation (AP) has been proposed, which efficiently clusters sparsely related data by passing messages between data points. However, in many cases we want to cluster large-scale data whose similarities are not sparse. This paper presents two variants of AP for grouping large-scale data with a dense similarity matrix. The local approach is partition affinity propagation (PAP), and the global method is landmark affinity propagation (LAP). PAP passes messages within subsets of the data first and then merges the subsets after the initial iterations; it can effectively reduce the number of iterations of clustering. LAP passes messages between landmark data points first and then clusters the non-landmark data points; it is a large-scale global approximation method that speeds up clustering. Experiments were conducted on many datasets, such as random data points, manifold subspaces, images of faces, and Chinese calligraphy, and the results demonstrate that the two approaches are feasible and practicable.
Funding: supported by the National Natural Science Foundation of China (Nos. 60533090 and 60603096), the National Hi-Tech Research and Development Program (863) of China (No. 2006AA010107), the Key Technology R&D Program of China (No. 2006BAH02A13-4), the Program for Changjiang Scholars and Innovative Research Team in University of China (No. IRT0652), and the Cultivation Fund of the Key Scientific and Technical Innovation Project of MOE, China (No. 706033).
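A rough sketch of the LAP idea as the abstract describes it: run AP on a landmark subset, then assign every remaining point to the nearest exemplar. This is an assumed reading built on scikit-learn's AffinityPropagation, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def landmark_ap(X, n_landmarks=50, seed=0):
    """LAP-style approximation: cluster a random landmark subset with AP,
    then label all points by their nearest landmark exemplar (Euclidean)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(n_landmarks, len(X)), replace=False)
    ap = AffinityPropagation(random_state=seed).fit(X[idx])
    centers = ap.cluster_centers_
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return d.argmin(axis=1)

# Two well-separated Gaussian blobs as a toy dense-similarity dataset
X = np.vstack([np.random.randn(200, 2), np.random.randn(200, 2) + 6])
labels = landmark_ap(X)
print(np.unique(labels))
```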
An outlier in one variable will smear the estimates of other measurements in data reconciliation (DR). In this article, a novel robust method is proposed for nonlinear dynamic data reconciliation that reduces the influence of outliers on the result of DR. The method introduces a penalty function matrix into the conventional least-squares objective function to assign small weights to outliers and large weights to normal measurements. To avoid the loss of data information, an element-wise Mahalanobis distance is proposed, as an improvement on the vector-wise distance, to construct the penalty function matrix. The correlation of measurement errors is also considered. By constructing the penalty weight matrix, the method brings robust statistical theory into the conventional least-squares estimator and achieves not only good robustness but also simple calculation. Simulation of a continuous stirred tank reactor verifies the effectiveness of the proposed algorithm.
Funding: supported by the National Natural Science Foundation of China (No. 60504033).
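The abstract does not give the penalty function itself; a common robust choice is a Huber-type weight on each element-wise standardized residual (the diagonal-covariance special case of an element-wise Mahalanobis distance). Below is a minimal sketch for a linear steady-state analogue of the problem, not the paper's nonlinear dynamic formulation.

```python
import numpy as np

def robust_reconcile(y, A, b, sigma, k=1.345, iters=10):
    """Penalty-weighted reconciliation sketch (assumed Huber-type weights):
    find estimates x near measurements y subject to linear balance
    constraints A x = b, iteratively down-weighting outlying elements."""
    x = y.copy()
    n, m = len(y), len(b)
    for _ in range(iters):
        r = (x - y) / sigma                        # element-wise standardized residual
        w = np.where(np.abs(r) <= k, 1.0, k / np.maximum(np.abs(r), 1e-12))
        W = np.diag(w / sigma ** 2)                # penalty weight matrix
        # minimize (x - y)' W (x - y)  s.t.  A x = b, via the KKT system
        K = np.block([[2 * W, A.T], [A, np.zeros((m, m))]])
        rhs = np.concatenate([2 * W @ y, b])
        x = np.linalg.solve(K, rhs)[:n]
    return x

# Toy mass balance x1 + x2 = x3 with a gross error in the second measurement
A = np.array([[1.0, 1.0, -1.0]])
b = np.array([0.0])
y = np.array([10.2, 25.0, 15.1])                   # true flows roughly (10, 5, 15)
sigma = np.array([0.2, 0.2, 0.2])
print(robust_reconcile(y, A, b, sigma))
```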