Journal Articles
3,360 articles found
Optimal Estimation of High-Dimensional Covariance Matrices with Missing and Noisy Data
1
Authors: Meiyin Wang, Wanzhou Ye. 《Advances in Pure Mathematics》, 2024, No. 4, pp. 214-227 (14 pages).
The estimation of covariance matrices is very important in many fields, such as statistics. In real applications, data are frequently affected by high dimensionality and noise. However, most relevant studies are based on complete data. This paper studies the optimal estimation of high-dimensional covariance matrices based on missing and noisy samples under the norm. First, a model with sub-Gaussian additive noise is presented. The generalized sample covariance is then modified to define a hard thresholding estimator, and the minimax upper bound is derived. After that, the minimax lower bound is derived, and it is concluded that the estimator presented in this article is rate-optimal. Finally, numerical simulation analysis is performed. The results show that for missing samples with sub-Gaussian noise, if the true covariance matrix is sparse, the hard thresholding estimator outperforms the traditional estimation method.
Keywords: high-dimensional covariance matrix; missing data; sub-Gaussian noise; optimal estimation
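As a rough sketch of the thresholding idea in this abstract, the following example zeroes out small entries of an ordinary sample covariance computed from complete data. The paper's generalized sample covariance for missing, noisy samples and its optimal threshold choice are not reproduced; the threshold `lam`, the data sizes, and the rule of keeping the diagonal are illustrative assumptions.

```python
import numpy as np

def hard_threshold_cov(X, lam):
    """Hard-thresholding covariance estimator: set entries of the sample
    covariance with magnitude below lam to zero, keeping the diagonal
    (the variances) intact."""
    S = np.cov(X, rowvar=False)              # ordinary sample covariance
    T = np.where(np.abs(S) >= lam, S, 0.0)   # drop small off-diagonal entries
    np.fill_diagonal(T, np.diag(S))          # never threshold the variances
    return T

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))               # n = 200 samples, p = 10 dimensions
C = hard_threshold_cov(X, lam=0.15)
```

When the true covariance is sparse, thresholding removes the many near-zero off-diagonal estimates that would otherwise accumulate error as the dimension grows.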
CABOSFV algorithm for high dimensional sparse data clustering (cited 7 times)
2
Authors: Sen Wu, Xuedong Gao (Management School, University of Science and Technology Beijing, Beijing 100083, China). 《Journal of University of Science and Technology Beijing》 (CSCD), 2004, No. 3, pp. 283-288 (6 pages).
An algorithm, Clustering Algorithm Based On Sparse Feature Vector (CABOSFV), was proposed for the high dimensional clustering of binary sparse data. This algorithm compresses the data effectively by using a tool called the 'Sparse Feature Vector', thus reducing the data scale enormously, and can obtain the clustering result with only one data scan. Both theoretical analysis and empirical tests showed that CABOSFV has low computational complexity. The algorithm finds clusters in high dimensional large datasets efficiently and handles noise effectively.
Keywords: clustering; data mining; sparse data; high dimensionality
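The abstract's one-scan, set-based idea can be illustrated with a toy single-pass clusterer for binary objects stored as sets of feature indices. This is loosely inspired by CABOSFV and is not the paper's exact Sparse Feature Vector definition: the `sparse_difference` formula and the absorption threshold below are assumptions.

```python
def sparse_difference(cluster):
    """Simplified sparse-difference of a list of binary objects (sets):
    features held by some but not all members, scaled by cluster size and
    by the features shared by all members (an assumed formula)."""
    common = set.intersection(*cluster)      # features all members share
    union = set.union(*cluster)
    e = len(union) - len(common)             # partially shared features
    return float('inf') if not common else e / (len(cluster) * len(common))

def cabosfv_like(objects, threshold):
    """One-scan clustering: absorb each object into the first cluster whose
    sparse-difference stays below the threshold, else start a new cluster."""
    clusters = []
    for obj in objects:                      # single pass over the data
        for c in clusters:
            if sparse_difference(c + [obj]) <= threshold:
                c.append(obj)
                break
        else:
            clusters.append([obj])
    return clusters

objs = [{1, 2, 3}, {1, 2, 4}, {7, 8}, {7, 8, 9}]
result = cabosfv_like(objs, threshold=1.0)
```

Each object is examined exactly once; a production version would keep a compact per-cluster summary instead of the raw member sets.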
Similarity measurement method of high-dimensional data based on normalized net lattice subspace (cited 4 times)
3
Authors: 李文法, Wang Gongming, Li Ke, Huang Su. 《High Technology Letters》 (EI/CAS), 2017, No. 2, pp. 179-184 (6 pages).
The performance of conventional similarity measurement methods is seriously affected by the curse of dimensionality of high-dimensional data. The reason is that data differences in sparse and noisy dimensions occupy a large proportion of the similarity, so that any two results appear almost equally dissimilar. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only the components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used, and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two or three orders of magnitude greater than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0,1], which makes it suitable for similarity analysis after dimensionality reduction.
Keywords: high-dimensional data; curse of dimensionality; similarity; normalization; subspace; NPsim
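A minimal sketch of the interval-mapping idea: each dimension's range is cut into k intervals, and only components falling in the same or an adjacent interval contribute to the similarity. The per-dimension contribution and k = 10 are assumptions; the paper's NPsim measure is not reproduced.

```python
def grid_similarity(x, y, lo, hi, k=10):
    """Similarity in a normalized net-lattice subspace: split each
    dimension's range [lo, hi] into k intervals and let a dimension
    contribute only when both components fall in the same or an
    adjacent interval (a sketch; the contribution formula is assumed)."""
    width = (hi - lo) / k
    total = 0.0
    for a, b in zip(x, y):
        ia = min(int((a - lo) / width), k - 1)   # interval index of a
        ib = min(int((b - lo) / width), k - 1)   # interval index of b
        if abs(ia - ib) <= 1:                    # same or adjacent cell
            total += 1.0 - abs(a - b) / (hi - lo)
    return total / len(x)                        # stays inside [0, 1]

s = grid_similarity([0.1, 0.5, 0.9], [0.12, 0.55, 0.1], lo=0.0, hi=1.0)
```

Because each dimension contributes at most 1/len(x), the result stays in [0, 1] regardless of dimensionality, matching the range property claimed in the abstract.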
Nonlinear Dimensionality Reduction and Data Visualization: A Review (cited 4 times)
4
Author: Hujun Yin. 《International Journal of Automation and Computing》 (EI), 2007, No. 3, pp. 294-303 (10 pages).
Dimensionality reduction and data visualization are useful and important processes in pattern recognition. Many techniques have been developed in recent years. The self-organizing map (SOM) can be an efficient method for this purpose. This paper reviews recent advances in this area and related approaches such as multidimensional scaling (MDS), nonlinear PCA, and principal manifolds, as well as the connections of the SOM and its recent variant, the visualization induced SOM (ViSOM), with these approaches. The SOM is shown to produce a quantized, qualitative scaling, while the ViSOM produces a quantitative or metric scaling and approximates the principal curve/surface. The SOM can also be regarded as a generalized MDS that relates two metric spaces by forming a topological mapping between them. The relationships among various recently proposed techniques such as ViSOM, Isomap, LLE, and eigenmap are discussed and compared.
Keywords: dimensionality reduction; nonlinear data projection; multidimensional scaling; self-organizing maps; nonlinear PCA; principal manifold
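The SOM at the core of this review can be sketched in a single training step: find the best-matching unit, then move every node toward the input with a Gaussian neighborhood centred on the winner. The 1-D map grid, learning rate, and neighborhood width below are illustrative assumptions.

```python
import numpy as np

def som_step(weights, x, lr, sigma):
    """One self-organizing map update: locate the best-matching unit on
    a 1-D grid of nodes, then pull all nodes toward the input x, weighted
    by a Gaussian neighborhood around the winner (minimal sketch)."""
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # winning node
    grid = np.arange(len(weights))
    h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))    # neighborhood
    return weights + lr * h[:, None] * (x - weights)

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 2))            # 8 map nodes living in 2-D data space
x = np.array([0.5, -0.5])
W_new = som_step(W, x, lr=0.3, sigma=1.5)
```

Iterating this step over many inputs, while shrinking `lr` and `sigma`, yields the quantized topological mapping the review describes.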
Similarity measure design for high dimensional data (cited 3 times)
5
Authors: LEE Sang-hyuk, YAN Sun, JEONG Yoon-su, SHIN Seung-soo. 《Journal of Central South University》 (SCIE/EI/CAS), 2014, No. 9, pp. 3534-3540 (7 pages).
Information analysis of high dimensional data was carried out through the application of similarity measures. High dimensional data were considered as an atypical structure. Additionally, overlapped and non-overlapped data were introduced, and the similarity measure analysis was illustrated and compared with a conventional similarity measure. As a result, overlapped data could be compared using the conventional similarity measure, while the analysis of non-overlapped data provided the clue to solving the similarity of high dimensional data. The similarity measure for high dimensional data was designed with consideration of neighborhood information, and conservative and strict solutions were proposed. The proposed similarity measure was applied to express financial fraud among multi-dimensional datasets. In an illustrative example, financial fraud similarity with respect to age, gender, qualification and job was presented, and high dimensional personal data were evaluated with the proposed similarity measure to determine how similar they are to financial fraud. Calculation results show that actual fraud has a rather high similarity measure compared to the average, from a minimum of 0.0609 to a maximum of 0.1667.
Keywords: high dimensional data; similarity measure; difference; neighborhood information; financial fraud
Seismic data reconstruction based on low dimensional manifold model (cited 1 time)
6
Authors: Nan-Ying Lan, Fan-Chang Zhang, Xing-Yao Yin. 《Petroleum Science》 (SCIE/CAS/CSCD), 2022, No. 2, pp. 518-533 (16 pages).
Seismic data reconstruction is an essential and fundamental step in the seismic data processing workflow, which is of profound significance for improving migration imaging quality, multiple suppression effect, and seismic inversion accuracy. Regularization methods play a central role in solving the underdetermined inverse problem of seismic data reconstruction. In this paper, a novel regularization approach, the low dimensional manifold model (LDMM), is proposed for reconstructing missing seismic data. Our work relies on the fact that seismic patches always occupy a low dimensional manifold. Specifically, we exploit the dimension of the seismic patch manifold as a regularization term in the reconstruction problem, and reconstruct the missing seismic data by enforcing low dimensionality on this manifold. The crucial procedure of the proposed method is to solve for the dimension of the patch manifold. To this end, we adopt an efficient dimensionality calculation method based on low-rank approximation, which provides a reliable safeguard for enforcing the constraints in the reconstruction process. Numerical experiments performed on synthetic and field seismic data demonstrate that, compared with the curvelet-based sparsity-promoting L1-norm minimization method and the multichannel singular spectrum analysis method, the proposed method obtains state-of-the-art reconstruction results.
Keywords: seismic data reconstruction; low dimensional manifold model; regularization; low-rank approximation
Coupling Ensemble Kalman Filter with Four-dimensional Variational Data Assimilation (cited 26 times)
7
Authors: Fuqing ZHANG, Meng ZHANG, James A. HANSEN. 《Advances in Atmospheric Sciences》 (SCIE/CAS/CSCD), 2009, No. 1, pp. 1-8 (8 pages).
This study examines the performance of coupling the deterministic four-dimensional variational assimilation system (4DVAR) with an ensemble Kalman filter (EnKF) to produce a superior hybrid approach for data assimilation. The coupled assimilation scheme (E4DVAR) benefits from using the state-dependent uncertainty provided by the EnKF while taking advantage of 4DVAR in preventing filter divergence: the 4DVAR analysis produces posterior maximum likelihood solutions through minimization of a cost function about which the ensemble perturbations are transformed, and the resulting ensemble analysis can be propagated forward both for the next assimilation cycle and as a basis for ensemble forecasting. The feasibility and effectiveness of this coupled approach are demonstrated in an idealized model with simulated observations. It is found that the E4DVAR is capable of outperforming both 4DVAR and the EnKF under both perfect- and imperfect-model scenarios. The performance of the coupled scheme is also less sensitive to either the ensemble size or the assimilation window length than that of standard EnKF or 4DVAR implementations.
Keywords: data assimilation; four-dimensional variational data assimilation; ensemble Kalman filter; Lorenz model; hybrid method
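The EnKF half of the coupled scheme can be sketched as a stochastic analysis step: the Kalman gain is built from the flow-dependent ensemble covariance, and each member is updated against perturbed observations. This is a textbook EnKF sketch, not the paper's E4DVAR coupling; the ensemble size, observation operator H, and error covariances are assumptions.

```python
import numpy as np

def enkf_analysis(ens, y, H, R, rng):
    """Stochastic EnKF analysis step: update every ensemble member with
    the Kalman gain built from the ensemble covariance and observations
    perturbed by noise drawn from R (minimal sketch)."""
    n, _ = ens.shape                        # n members, p state variables
    Pf = np.cov(ens, rowvar=False)          # flow-dependent forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
    perturbed = y + rng.multivariate_normal(np.zeros(len(y)), R, size=n)
    return ens + (perturbed - ens @ H.T) @ K.T

rng = np.random.default_rng(2)
ens = rng.normal(loc=5.0, scale=2.0, size=(40, 3))   # forecast ensemble
H = np.array([[1.0, 0.0, 0.0]])                      # observe the first variable
R = np.array([[0.1]])                                # observation-error covariance
ana = enkf_analysis(ens, np.array([4.0]), H, R, rng)
```

The analysis mean is drawn toward the observation and the ensemble spread shrinks, which is exactly the state-dependent uncertainty information the hybrid E4DVAR exploits.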
A theoretical study of the multigrid three-dimensional variational data assimilation scheme using a simple bilinear interpolation algorithm (cited 5 times)
8
Authors: LI Wei, XIE Yuanfu, HAN Guijun. 《Acta Oceanologica Sinica》 (SCIE/CAS/CSCD), 2013, No. 3, pp. 80-87 (8 pages).
In order to solve the so-called "bull-eye" problem caused by using a simple bilinear interpolation as the observational mapping operator in the cost function of the multigrid three-dimensional variational (3DVAR) data assimilation scheme, a smoothing term, equivalent to a penalty term, is introduced into the cost function as a means of troubleshooting. A theoretical analysis is first performed to identify what causes the "bull-eye" issue. The meaning of the smoothing term is then elucidated, and the uniqueness of the solution of the multigrid 3DVAR with the smoothing term added is discussed through theoretical deduction for the one-dimensional (1D) case and through two idealized data assimilation experiments (one- and two-dimensional (2D) cases). By exploring the relationship between the smoothing term and the recursive filter both theoretically and practically, it is revealed why satisfactory analysis results can be achieved by using the proposed solution to this issue in the multigrid 3DVAR.
Keywords: multigrid; three-dimensional variational data assimilation; bilinear interpolation
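The role of the smoothing term can be illustrated with a toy 1-D cost function: an observation misfit plus a penalty on second differences of the state. With the penalty, a grid-scale spike (a "bull-eye") that fits the observation exactly still pays a cost that a smooth state avoids. The quadratic penalty form and the weight `alpha` are assumptions, not the paper's exact formulation.

```python
import numpy as np

def cost_with_smoothing(x, y, H, alpha):
    """3DVAR-style cost on one grid level: squared misfit to the mapped
    observations plus a smoothing (penalty) term on discrete second
    differences of the state, which damps grid-scale oscillations."""
    misfit = H @ x - y
    d2 = x[:-2] - 2 * x[1:-1] + x[2:]        # discrete second derivative
    return 0.5 * misfit @ misfit + 0.5 * alpha * d2 @ d2

n = 21
H = np.zeros((1, n)); H[0, 10] = 1.0         # one observation at grid point 10
y = np.array([1.0])
spiky = np.zeros(n); spiky[10] = 1.0         # fits the observation, "bull-eye"
smooth = np.full(n, 1.0)                     # fits the observation, smooth
j_spiky = cost_with_smoothing(spiky, y, H, alpha=1.0)
j_smooth = cost_with_smoothing(smooth, y, H, alpha=1.0)
```

The minimizer is therefore pushed away from spiky analyses even though both candidates match the observation perfectly.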
A 3-DIMENSIONAL DATA MODEL FOR VISUALIZING CLOVERLEAF JUNCTION IN A CITY MODEL (cited 6 times)
9
Authors: Chen Jun, Sun Min, Zhou Qiming. 《Geo-Spatial Information Science》, 1999, No. 1, pp. 9-15 (7 pages).
Little research has been done on cloverleaf junction expression in a 3-dimensional city model (3DCM). The main reason is that a cloverleaf junction is often a complex and enormous construction: its main body straddles the air, with aerial intersections between its parts. These complex features make cloverleaf junctions quite different from buildings and terrain; therefore, it is difficult to express this kind of spatial object in the same way as buildings and terrain. In this paper, the authors analyze the spatial characteristics of cloverleaf junctions, propose an all-constraint-points TIN algorithm to partition the cloverleaf junction road surface, and develop a method to visualize the road surface using the TIN. In order to manage cloverleaf junction data efficiently, the authors also analyzed the mechanism of 3DCM data management, extended the BLOB type in a relational database, and combined an R-tree index to manage 3D spatial data. Based on this extension, an appropriate data …
Keywords: 3-dimensional city model (3DCM); GIS; cloverleaf junction; data structure; database
Testing a Four-Dimensional Variational Data Assimilation Method Using an Improved Intermediate Coupled Model for ENSO Analysis and Prediction (cited 10 times)
10
Authors: Chuan GAO, Xinrong WU, Rong-Hua ZHANG. 《Advances in Atmospheric Sciences》 (SCIE/CAS/CSCD), 2016, No. 7, pp. 875-888 (14 pages).
A four-dimensional variational (4D-Var) data assimilation method is implemented in an improved intermediate coupled model (ICM) of the tropical Pacific. A twin experiment is designed to evaluate the impact of the 4D-Var data assimilation algorithm on ENSO analysis and prediction based on the ICM. The model error is assumed to arise only from parameter uncertainty. The "observation" of the SST anomaly, which is sampled from a "truth" model simulation that takes default parameter values and has Gaussian noise added, is directly assimilated into the assimilation model, whose parameters are set erroneously. Results show that 4D-Var effectively reduces the error of ENSO analysis and therefore improves the prediction skill of ENSO events compared with the non-assimilation case. These results provide a promising way for the ICM to achieve better real-time ENSO prediction.
Keywords: four-dimensional variational data assimilation; intermediate coupled model; twin experiment; ENSO prediction
A DESIGN OF THREE-DIMENSIONAL SPATIAL DATA MODEL AND ITS DATA STRUCTURE IN GEOLOGICAL EXPLORATION ENGINEERING
11
Authors: Cheng Penggen, Gong Jianya, Wang Yandong, Sui Haigang. 《Geo-Spatial Information Science》, 1999, No. 1, pp. 78-85 (8 pages).
The key to developing 3-D GISs is the study of 3-D data models and data structures. Some data models and data structures have been presented by scholars. Because of the complexity of 3-D spatial phenomena, there are no perfect data structures that can describe all spatial entities. Every data structure has its own advantages and disadvantages, and it is difficult to design a single data structure to meet different needs. An important subject in 3-D data models is developing a data model that integrates vector and raster data structures. A special 3-D spatial data model based on the distribution features of spatial entities should be designed. We took geological exploration engineering as the research background and, by adopting object-oriented techniques, designed an integrated data model whose data structures integrate vector and raster data. Research achievements are presented in this paper.
Keywords: geological exploration engineering; geographic information system; three-dimensional data model; data structure
Subspace Clustering in High-Dimensional Data Streams:A Systematic Literature Review
12
Authors: Nur Laila Ab Ghani, Izzatdin Abdul Aziz, Said Jadid AbdulKadir. 《Computers, Materials & Continua》 (SCIE/EI), 2023, No. 5, pp. 4649-4668 (20 pages).
Clustering high dimensional data is challenging as data dimensionality increases the distance between data points, resulting in sparse regions that degrade clustering performance. Subspace clustering is a common approach for processing high-dimensional data by finding relevant features for each cluster in the data space. Subspace clustering methods extend traditional clustering to account for the constraints imposed by data streams, which are not only high-dimensional, but also unbounded and evolving. This necessitates the development of subspace clustering algorithms that can handle high dimensionality and adapt to the unique characteristics of data streams. Although many articles have contributed to the literature review on data stream clustering, there is currently no specific review of subspace clustering algorithms in high-dimensional data streams. Therefore, this article aims to systematically review the existing literature on subspace clustering of data streams in high-dimensional streaming environments. The review follows a systematic methodological approach and includes 18 articles for the final analysis. The analysis focused on two research questions related to the general clustering process and to dealing with the unbounded and evolving characteristics of data streams. The main findings relate to six elements: clustering process, cluster search, subspace search, synopsis structure, cluster maintenance, and evaluation measures. Most algorithms use a two-phase clustering approach consisting of an initialization stage, a refinement stage, a cluster maintenance stage, and a final clustering stage. The density-based top-down subspace clustering approach is more widely used than the others because it is able to distinguish true clusters from outliers using projected microclusters. Most algorithms implicitly adapt to the evolving nature of the data stream by using a time fading function that is sensitive to outliers. Future work can focus on the clustering framework, parameter optimization, subspace search techniques, memory-efficient synopsis structures, explicit cluster change detection, and intrinsic performance metrics. This article can serve as a guide for researchers interested in high-dimensional subspace clustering methods for data streams.
Keywords: clustering; subspace clustering; projected clustering; data stream; stream clustering; high dimensionality; evolving data stream; concept drift
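The time fading function mentioned in the findings is commonly an exponential decay of a microcluster's weight with the time since it last absorbed a point; the base-2 form and the decay rate below are assumptions typical of density-based stream clustering, not a definition taken from the reviewed papers.

```python
def fading_weight(t_now, t_update, lam=0.25):
    """Exponential time-fading weight 2^(-lam * age): microclusters that
    have not absorbed points recently fade toward zero, so the synopsis
    structure adapts to an evolving stream (lam is an assumed decay rate)."""
    return 2.0 ** (-lam * (t_now - t_update))

w_fresh = fading_weight(t_now=100, t_update=99)   # recently updated cluster
w_stale = fading_weight(t_now=100, t_update=60)   # long-untouched cluster
```

Stale microclusters can then be pruned once their weight drops below a threshold, which keeps memory bounded on an unbounded stream.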
Multi-dimensional database design and implementation of dam safety monitoring system (cited 1 time)
13
Authors: Zhao Erfeng, Wang Yachao, Jiang Yufeng, Zhang Lei, Yu Hong. 《Water Science and Engineering》 (EI/CAS), 2008, No. 3, pp. 112-120 (9 pages).
To improve the effectiveness of dam safety monitoring database systems, the development process of a multi-dimensional conceptual data model was analyzed and a logic design was achieved in multi-dimensional database mode. The optimal data model was confirmed by identifying data objects, defining relations, and reviewing entities. The conversion of relations among entities to external keys, and of entities and physical attributes to tables and fields, is interpreted completely. On this basis, a multi-dimensional database has been established that reflects the management and analysis of monitoring data in a dam safety monitoring system, for which factual tables and dimensional tables have been designed. Finally, based on the service design and user interface design, the dam safety monitoring system has been developed with Delphi as the development tool. This development project shows that the multi-dimensional database can simplify the development process and minimize hidden dangers in the database structure design. It is superior to other dam safety monitoring system development models and can provide a new research direction for system developers.
Keywords: dam safety; multi-dimensional database; conceptual data model; database mode; monitoring system
Augmented Industrial Data-Driven Modeling Under the Curse of Dimensionality
14
Authors: Xiaoyu Jiang, Xiangyin Kong, Zhiqiang Ge. 《IEEE/CAA Journal of Automatica Sinica》 (SCIE/EI/CSCD), 2023, No. 6, pp. 1445-1461 (17 pages).
The curse of dimensionality refers to the problem of increased sparsity and computational complexity when dealing with high-dimensional data. In recent years, the types and variables of industrial data have increased significantly, making data-driven models more challenging to develop. To address this problem, data augmentation technology has been introduced as an effective tool to solve the sparsity problem of high-dimensional industrial data. This paper systematically explores and discusses the necessity, feasibility, and effectiveness of augmented industrial data-driven modeling in the context of the curse of dimensionality and virtual big data. Then, the process of data augmentation modeling is analyzed, and the concept of data boosting augmentation is proposed. Data boosting augmentation involves designing the reliability weight and actual-virtual weight functions, and developing a double weighted partial least squares model to optimize the three stages of data generation, data fusion, and modeling. This approach significantly improves the interpretability, effectiveness, and practicality of data augmentation in industrial modeling. Finally, the proposed method is verified using practical examples of fault diagnosis systems and virtual measurement systems in industry. The results demonstrate the effectiveness of the proposed approach in improving the accuracy and robustness of data-driven models, making them more suitable for real-world industrial applications.
Keywords: curse of dimensionality; data augmentation; data-driven modeling; industrial processes; machine learning
Goodness-of-fit tests for multi-dimensional copulas: Expanding application to historical drought data (cited 2 times)
15
Authors: Ming-wei MA, Li-liang REN, Song-bai SONG, Jia-li SONG, Shan-hu JIANG. 《Water Science and Engineering》 (EI/CAS/CSCD), 2013, No. 1, pp. 18-30 (13 pages).
The question of how to choose a copula model that best fits a given dataset is a predominant limitation of the copula approach, and the present study aims to investigate techniques for goodness-of-fit tests for multi-dimensional copulas. A goodness-of-fit test based on Rosenblatt's transformation was mathematically expanded from two dimensions to three dimensions, and procedures for a bootstrap version of the test were provided. Through stochastic copula simulation, an empirical application to historical drought data at the Lintong Gauge Station shows that the goodness-of-fit tests perform well, revealing that both trivariate Gaussian and Student t copulas are acceptable for modeling the dependence structures of the observed drought duration, severity, and peak. Goodness-of-fit tests for multi-dimensional copulas can provide further support for potential applications of a wider range of copulas to describe the associations of correlated hydrological variables. However, for the application of copulas with more than three dimensions, more complicated computational efforts as well as exploration and parameterization of the corresponding copulas are required.
Keywords: goodness-of-fit test; multi-dimensional copulas; stochastic simulation; Rosenblatt's transformation; bootstrap approach; drought data
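Rosenblatt's transformation, the basis of the expanded test, maps dependent uniforms into independent ones when the hypothesized copula is correct. Below is a 2-D sketch for a Gaussian copula using only the standard library; the paper's extension to three dimensions and its bootstrap procedure are not reproduced, and the correlation value is illustrative.

```python
from math import sqrt
from statistics import NormalDist

def rosenblatt_gaussian(u1, u2, rho):
    """Rosenblatt transform for a bivariate Gaussian copula with
    correlation rho: maps (u1, u2) to a pair that is independent
    uniform exactly when the hypothesized copula is correct."""
    nd = NormalDist()
    z1, z2 = nd.inv_cdf(u1), nd.inv_cdf(u2)          # to normal scores
    e2 = nd.cdf((z2 - rho * z1) / sqrt(1.0 - rho ** 2))  # conditional CDF
    return u1, e2

e1, e2 = rosenblatt_gaussian(0.3, 0.7, rho=0.0)      # rho = 0: identity map
```

A goodness-of-fit test then checks whether the transformed sample looks like independent uniforms, typically with a bootstrap to calibrate the test statistic.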
Multidimensional Data Querying on Tree-Structured Overlay
16
Authors: XU Lizhen, WANG Shiyuan. 《Wuhan University Journal of Natural Sciences》 (CAS), 2006, No. 5, pp. 1367-1372 (6 pages).
Multidimensional data query has been gaining much interest in database research communities in recent years, yet many of the existing studies focus mainly on centralized systems. A solution to querying in a Peer-to-Peer (P2P) environment was proposed to achieve both low processing cost, in terms of the number of peers accessed and search messages, and balanced query loads among peers. The system is based on a balanced tree-structured P2P network. By partitioning the query space intelligently, the amount of query forwarding is effectively controlled, and the number of peers involved and search messages are also limited. Dynamic load balancing can be achieved during space partitioning and query resolving. Extensive experiments confirm the effectiveness and scalability of our algorithms on P2P networks.
Keywords: range query; skyline query; P2P; indexing; multi-dimensional data partition
Dimensionality Reduction of High-Dimensional Highly Correlated Multivariate Grapevine Dataset
17
Authors: Uday Kant Jha, Peter Bajorski, Ernest Fokoue, Justine Vanden Heuvel, Jan van Aardt, Grant Anderson. 《Open Journal of Statistics》, 2017, No. 4, pp. 702-717 (16 pages).
Viticulturists traditionally have a keen interest in studying the relationship between the biochemistry of grapevines' leaves/petioles and their associated spectral reflectance in order to understand the fruit ripening rate, water status, nutrient levels, and disease risk. In this paper, we implement imaging spectroscopy (hyperspectral) reflectance data, for the reflective 330-2510 nm wavelength region (986 total spectral bands), to assess vineyard nutrient status; this constitutes a high dimensional dataset with a covariance matrix that is ill-conditioned. The identification of the variables (wavelength bands) that contribute useful information for nutrient assessment and prediction plays a pivotal role in multivariate statistical modeling. In recent years, researchers have successfully developed many continuous, nearly unbiased, sparse and accurate variable selection methods to overcome this problem. This paper compares four regularized and one functional regression method: Elastic Net, Multi-Step Adaptive Elastic Net, Minimax Concave Penalty, iterative Sure Independence Screening, and Functional Data Analysis for wavelength variable selection. Thereafter, the predictive performance of these regularized sparse models is enhanced using stepwise regression. This comparative study of regression methods using a high-dimensional and highly correlated grapevine hyperspectral dataset revealed that Elastic Net variable selection yields the best predictive ability.
Keywords: high-dimensional data; Multi-Step Adaptive Elastic Net; Minimax Concave Penalty; Sure Independence Screening; Functional Data Analysis
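Of the compared methods, Sure Independence Screening is the simplest to sketch: rank wavelength bands by absolute marginal correlation with the response and keep the top d before fitting a sparse model. The synthetic data, the choice d = 10, and the plain (non-iterative) variant are assumptions; the paper uses the iterative version on real hyperspectral bands.

```python
import numpy as np

def sis_screen(X, y, d):
    """Sure Independence Screening sketch: standardize, rank predictors
    by absolute marginal correlation with the response, and keep the
    indices of the top d, shrinking an ultra-high-dimensional band set
    before a sparse regression step."""
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    yc = (y - y.mean()) / y.std()
    corr = np.abs(Xc.T @ yc) / len(y)            # marginal correlations
    return np.sort(np.argsort(corr)[::-1][:d])   # indices of top-d bands

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 500))                  # 100 samples, 500 "bands"
y = 2.0 * X[:, 7] - 1.5 * X[:, 42] + 0.1 * rng.normal(size=100)
kept = sis_screen(X, y, d=10)
```

The surviving bands would then be passed to a penalized fit such as the Elastic Net, mirroring the screen-then-select pipeline compared in the paper.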
VISUALIZATION OF THREE-DIMENSIONAL DATA FIELD AND ITS APPLICATION IN MACHINE TESTING
18
Authors: YIN Aijun, QIN Shuren, TANG Baoping. 《Chinese Journal of Mechanical Engineering》 (SCIE/EI/CAS/CSCD), 2006, No. 1, pp. 81-84 (4 pages).
In order to realize visualization of a three-dimensional data field (TDDF) in instruments, two methods of TDDF visualization and the usual manner of quick graphic and image processing are analyzed. How to use the OpenGL technique and the characteristics of the analyzed data to construct a TDDF, together with the ways of realistic processing and interactive processing, is described. Then the medium geometric element and a related realistic model are constructed by means of the first algorithm. Models obtained for attaching the third dimension in a three-dimensional data field are presented. An example of TDDF realization for machine measuring is provided. The analysis of the resulting graphics indicates that the three-dimensional graphics built by the developed method feature good realism, fast processing, and strong interaction.
Keywords: visualization in scientific computing; three-dimensional data field (TDDF); test
Research of a New Multi-dimensional Dataset Search Framework on Unstructured P2P
19
Authors: DENG Hui-min, ZENG Bi-qing, XIA Xu. 《通讯和计算机(中英文版)》, 2007, No. 1, pp. 1-7 (7 pages).
Keywords: composite data; data mining; P2P systems; data querying
Three-dimensional weighting reconstruction algorithm for circular cone-beam CT under large scan angles (cited 3 times)
20
Authors: Ya-Fei Yang, Ding-Hua Zhang, Kui-Dong Huang, Fu-Qiang Yang, Zong-Zhao Gao. 《Nuclear Science and Techniques》 (SCIE/CAS/CSCD), 2017, No. 8, pp. 75-82 (8 pages).
Improving the imaging quality of cone-beam CT under large cone angle scans has been an important area of CT imaging research. Considering the idea of conjugate rays and making up missing data, we propose a three-dimensional (3D) weighting reconstruction algorithm for cone-beam CT. The 3D weighting function is added in the back-projection process to reduce the axial density drop and improve the accuracy of the FDK algorithm. Having a simple structure, the algorithm can be implemented easily without rebinning the native cone-beam data into cone-parallel beam data. The performance of the algorithm is evaluated using two computer simulations and a real industrial component, and the results show that the algorithm achieves better performance in reducing axial intensity drop artifacts and has a wide range of applications.
Keywords: reconstruction algorithm; cone-beam CT; three-dimensional; large-angle scanning; weighting; performance evaluation; axial intensity; FDK algorithm