Journal Articles
10,471 articles found
A Hierarchical Method for Locating the Interferometric Fringes of Celestial Sources in the Visibility Data
1
Authors: Rong Ma, Ruiqing Yan, Hanshuai Cui, Xiaochun Cheng, Jixia Li, Fengquan Wu, Zongyao Yin, Hao Wang, Wenyi Zeng, Xianchuan Yu. Research in Astronomy and Astrophysics (SCIE, CAS, CSCD), 2024, No. 3, pp. 110-128 (19 pages)
In source detection in the Tianlai project, locating the interferometric fringe in visibility data accurately will influence downstream tasks drastically, such as physical parameter estimation and weak source exploration. Considering that traditional locating methods are time-consuming and supervised methods require a great quantity of expensive labeled data, in this paper we first investigate the characteristics of interferometric fringes in the simulation and real scenarios separately, and integrate an almost parameter-free unsupervised clustering method and a seed-filling or eraser algorithm to propose a hierarchical plug-and-play method to improve location accuracy. Then, we apply our method to locate single and multiple sources' interferometric fringes in simulation data. Next, we apply our method to real data taken from the Tianlai radio telescope array. Finally, we compare with state-of-the-art unsupervised methods. These results show that our method is robust in different scenarios and can improve location measurement accuracy effectively.
Keywords: methods: data analysis, techniques: image processing, techniques: interferometric
Application of the finite analytic numerical method to a flow-dependent variational data assimilation
2
Authors: Yan Hu, Wei Li, Xuefeng Zhang, Guimei Liu, Liang Zhang. Acta Oceanologica Sinica (SCIE, CAS, CSCD), 2024, No. 3, pp. 30-39 (10 pages)
An anisotropic diffusion filter can be used to model a flow-dependent background error covariance matrix, which can be achieved by solving the advection-diffusion equation. Because of the directionality of the advection term, the discrete method needs to be chosen very carefully. The finite analytic method is an alternative scheme for solving the advection-diffusion equation. As a combination of analytical and numerical methods, it not only has high calculation accuracy but also holds the characteristic of automatic upwinding. To demonstrate its ability, one-dimensional steady and unsteady advection-diffusion equation numerical examples are solved by the finite analytic method. The more widely used upwind difference method is used as a control approach. The result indicates that the finite analytic method has higher accuracy than the upwind difference method. For the two-dimensional case, the finite analytic method still performs better. In the three-dimensional variational assimilation experiment, the finite analytic method can effectively improve analysis field accuracy, and its effect is significantly better than that of the upwind difference and central difference methods. Moreover, it remains a more effective solution method in the strong-flow regions where the advection-diffusion filter performs most prominently.
Keywords: finite analytic method, advection-diffusion equation, data assimilation, flow-dependent
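The role of the upwind difference scheme as the control approach can be illustrated with a small benchmark. The sketch below (not the authors' code; the velocity, diffusivity and grid size are assumed) solves the 1D steady advection-diffusion equation with a first-order upwind scheme and reports its error against the exact solution, the kind of reference the finite analytic method is compared to.

```python
import numpy as np

# Illustrative sketch: solve u*dc/dx = D*d2c/dx2 on [0, L] with c(0)=0, c(L)=1
# using a first-order upwind scheme, then compare with the exact solution.
u, D, L, n = 1.0, 0.05, 1.0, 41          # assumed velocity, diffusivity, domain, grid size
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]

A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0                # Dirichlet boundary rows
b[-1] = 1.0
for i in range(1, n - 1):
    # upwind convection (u > 0) plus central diffusion
    A[i, i - 1] = -u / dx - D / dx**2
    A[i, i]     =  u / dx + 2.0 * D / dx**2
    A[i, i + 1] = -D / dx**2
c_upwind = np.linalg.solve(A, b)

c_exact = (np.exp(u * x / D) - 1.0) / (np.exp(u * L / D) - 1.0)
print("max abs error (upwind):", np.abs(c_upwind - c_exact).max())
```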
Artificial Immune Detection for Network Intrusion Data Based on Quantitative Matching Method
3
Authors: CaiMing Liu, Yan Zhang, Zhihui Hu, Chunming Xie. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 2361-2389 (29 pages)
Artificial immune detection can be used to detect network intrusions in an adaptive approach, and proper matching methods can improve the accuracy of immune detection methods. This paper proposes an artificial immune detection model for network intrusion data based on a quantitative matching method. The proposed model defines the detection process by using network data and decimal values to express features, and artificial immune mechanisms are simulated to define immune elements. Then, to improve the accuracy of similarity calculation, a quantitative matching method is proposed. The model uses mathematical methods to train and evolve immune elements, increasing the diversity of immune recognition and allowing for the successful detection of unknown intrusions. The proposed model's objective is to accurately identify known intrusions and expand the identification of unknown intrusions through signature detection and immune detection, overcoming the disadvantages of traditional methods. The experiment results show that the proposed model can detect intrusions effectively. It has a detection rate of more than 99.6% on average and a false alarm rate of 0.0264%. It outperforms existing immune intrusion detection methods in terms of comprehensive detection performance.
Keywords: immune detection, network intrusion, network data, signature detection, quantitative matching method
PSRDP:A Parallel Processing Method for Pulsar Baseband Data
4
Authors: Ya-Zhou Zhang, Hai-Long Zhang, Jie Wang, Xin-Chen Ye, Shuang-Qiang Wang, Xu Du, Han Wu, Ting Zhang, Shao-Cong Guo, Meng Zhang. Research in Astronomy and Astrophysics (SCIE, CAS, CSCD), 2024, No. 1, pp. 300-310 (11 pages)
To address the problem of real-time processing of ultra-wide bandwidth pulsar baseband data, we designed and implemented a pulsar baseband data processing algorithm (PSRDP) based on GPU parallel computing technology. PSRDP can perform operations such as baseband data unpacking, channel separation, coherent dedispersion, Stokes detection, phase and folding period prediction, and folding integration in GPU clusters. We tested the algorithm using the J0437-4715 pulsar baseband data generated by the CASPSR and Medusa backends of the Parkes telescope, and the J0332+5434 pulsar baseband data generated by the self-developed backend of the Nan Shan Radio Telescope. We obtained the pulse profiles of each set of baseband data. Through experimental analysis, we found that the pulse profiles generated by the PSRDP algorithm in this paper are essentially consistent with the processing results of the Digital Signal Processing Software for Pulsar Astronomy (DSPSR), which verifies the effectiveness of the PSRDP algorithm. Furthermore, using the same baseband data, we compared the processing speed of PSRDP with DSPSR, and the results showed that PSRDP was not slower than DSPSR in terms of speed. The theoretical and technical experience gained from the PSRDP algorithm research in this article lays a technical foundation for the real-time processing of QTT (Qi Tai radio Telescope) ultra-wide bandwidth pulsar baseband data.
Keywords: (stars:) pulsars: general, methods: data analysis, techniques: miscellaneous
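As a rough illustration of the folding-integration step listed above (and only that step; unpacking, channel separation and coherent dedispersion are omitted), the sketch below folds a synthetic intensity series at an assumed period to build a pulse profile. The sample rate, period and data are all made up, not taken from PSRDP.

```python
import numpy as np

# Fold a (pretend dedispersed) intensity time series at an assumed pulse period
# to accumulate a mean pulse profile over phase bins.
fs = 1.0e4                # samples per second (assumed)
period = 0.714            # folding period in seconds (assumed)
n_bins = 128              # phase bins in the folded profile

t = np.arange(int(10 * fs)) / fs                       # 10 s of synthetic data
signal = 1.0 + 5.0 * (np.mod(t, period) < 0.002)       # synthetic pulse train
signal += np.random.default_rng(0).normal(0, 1, t.size)

phase = np.mod(t / period, 1.0)                        # pulse phase of each sample
bins = (phase * n_bins).astype(int)
profile = np.bincount(bins, weights=signal, minlength=n_bins)
counts = np.bincount(bins, minlength=n_bins)
profile /= np.maximum(counts, 1)                       # mean intensity per phase bin
print("peak phase bin:", profile.argmax())
```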
Optimized air-ground data fusion method for mine slope modeling
5
Authors: LIU Dan, HUANG Man, TAO Zhigang, HONG Chenjie, WU Yuewei, FAN En, YANG Fei. Journal of Mountain Science (SCIE, CSCD), 2024, No. 6, pp. 2130-2139 (10 pages)
Refined 3D modeling of mine slopes is pivotal for precise prediction of geological hazards. Aiming at the inadequacy of existing single modeling methods in comprehensively representing the overall and localized characteristics of mining slopes, this study introduces a new method that fuses model data from unmanned aerial vehicle (UAV) tilt photogrammetry and 3D laser scanning through a data alignment algorithm based on control points. First, the mini-batch K-Medoids algorithm is utilized to cluster the point cloud data from ground 3D laser scanning. Then, the elbow rule is applied to determine the optimal cluster number (K0), and the feature points are extracted. Next, the nearest neighbor point algorithm is employed to match the feature points obtained from UAV tilt photogrammetry, and the internal point coordinates are adjusted through the distance-weighted average to construct a 3D model. Finally, by integrating an engineering case study, the K0 value is determined to be 8, with a matching accuracy between the two model datasets ranging from 0.0669 to 1.0373 mm. Therefore, compared with the modeling method utilizing the K-Medoids clustering algorithm, the new modeling method significantly enhances the computational efficiency, the accuracy of selecting the optimal number of feature points in 3D laser scanning, and the precision of the 3D model derived from UAV tilt photogrammetry. This method provides a research foundation for constructing mine slope models.
Keywords: air-ground data fusion method, mini-batch K-Medoids algorithm, elbow rule, optimal cluster number, 3D laser scanning, UAV tilt photogrammetry
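A minimal sketch of the elbow-rule selection of the cluster number K0. The paper applies mini-batch K-Medoids to laser-scan point clouds; here scikit-learn's MiniBatchKMeans is used as a readily available stand-in and the point cloud is synthetic, so both are assumptions rather than the authors' setup.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

# Synthetic 3D "point cloud" with several offset blobs (illustration only).
rng = np.random.default_rng(42)
points = rng.normal(size=(5000, 3)) + rng.integers(0, 8, size=(5000, 1)) * 5.0

inertia = []
ks = range(1, 15)
for k in ks:
    model = MiniBatchKMeans(n_clusters=k, n_init=3, random_state=0).fit(points)
    inertia.append(model.inertia_)           # within-cluster cost for each k

# Elbow heuristic: pick the k after which the relative cost drop flattens out.
drops = -np.diff(inertia) / np.array(inertia[:-1])
k0 = ks[int(np.argmax(drops < 0.10))]        # first k whose next drop falls below 10%
print("suggested K0:", k0)
```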
A Study of EM Algorithm as an Imputation Method: A Model-Based Simulation Study with Application to a Synthetic Compositional Data
6
Authors: Yisa Adeniyi Abolade, Yichuan Zhao. Open Journal of Modelling and Simulation, 2024, No. 2, pp. 33-42 (10 pages)
Compositional data, such as relative information, is a crucial aspect of machine learning and other related fields. It is typically recorded as closed data, summing to a constant like 100%. The statistical linear model is the most used technique for identifying hidden relationships between underlying random variables of interest. However, data quality is a significant challenge in machine learning, especially when missing data is present. The linear regression model is a commonly used statistical modeling technique applied in various settings to find relationships between variables of interest. When estimating linear regression parameters, which are useful for things like future prediction and partial effects analysis of independent variables, maximum likelihood estimation (MLE) is the method of choice. However, many datasets contain missing observations, which can lead to costly and time-consuming data recovery. To address this issue, the expectation-maximization (EM) algorithm has been suggested as a solution for situations involving missing data. The EM algorithm repeatedly finds the best estimates of parameters in statistical models that depend on variables or data that have not been observed. This is called maximum likelihood or maximum a posteriori (MAP) estimation. Using the present estimate as input, the expectation (E) step constructs a log-likelihood function. Finding the parameters that maximize the expected log-likelihood, as determined in the E step, is the job of the maximization (M) step. This study looked at how well the EM algorithm worked on a synthetic compositional dataset with missing observations. It used both robust least squares and ordinary least squares regression techniques. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-Nearest Neighbor (k-NN) and mean imputation, in terms of Aitchison distances and covariance.
Keywords: compositional data, linear regression model, least squares method, robust least squares method, synthetic data, Aitchison distance, maximum likelihood estimation, expectation-maximization algorithm, k-nearest neighbor and mean imputation
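A compact sketch of EM-based imputation under a multivariate normal working model, one standard way to realize the E and M steps described above. The data, missingness pattern and fixed iteration count are assumptions; the study's robust and ordinary least squares setups are not reproduced here.

```python
import numpy as np

def em_impute(X, n_iter=50):
    """EM for data assumed multivariate normal; missing entries are NaN."""
    n, p = X.shape
    miss = np.isnan(X)
    mu = np.nanmean(X, axis=0)
    sigma = np.diag(np.nanvar(X, axis=0)) + 1e-6 * np.eye(p)
    X_hat = X.copy()
    for _ in range(n_iter):
        S = np.zeros((p, p))                               # accumulates E[x x^T]
        for i in range(n):
            m, o = miss[i], ~miss[i]
            if m.any():
                Soo_inv = np.linalg.inv(sigma[np.ix_(o, o)])
                reg = sigma[np.ix_(m, o)] @ Soo_inv
                X_hat[i, m] = mu[m] + reg @ (X[i, o] - mu[o])   # E-step: conditional mean
                C = np.zeros((p, p))
                C[np.ix_(m, m)] = sigma[np.ix_(m, m)] - reg @ sigma[np.ix_(o, m)]
                S += np.outer(X_hat[i], X_hat[i]) + C
            else:
                S += np.outer(X_hat[i], X_hat[i])
        mu = X_hat.mean(axis=0)                            # M-step: update mean, covariance
        sigma = S / n - np.outer(mu, mu)
    return X_hat, mu, sigma

# Toy demo: 200 bivariate-normal rows with roughly 20% of one column missing.
rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=200)
X[rng.random(200) < 0.2, 1] = np.nan
X_filled, mu_hat, sigma_hat = em_impute(X)
print(mu_hat, sigma_hat, sep="\n")
```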
Application and evaluation of layering shear method in LADCP data processing
7
Authors: Zijian Cui, Chujin Liang, Binbin Guo, Feilong Lin, Yong Mu. Acta Oceanologica Sinica (SCIE, CAS, CSCD), 2023, No. 12, pp. 9-21 (13 pages)
The current velocity observation of the LADCP (Lowered Acoustic Doppler Current Profiler) has the advantages of a large vertical range of observation and high operability compared with traditional current measurement methods, and is widely used in the field of ocean observation. The shear and inverse methods are now commonly used by the international marine community to process LADCP data and calculate ocean current profiles. The two methods have their advantages and shortcomings. The shear method calculates the value of current shear more accurately, while its accuracy in the absolute value of the current is lower. The inverse method calculates the absolute value of the current velocity more accurately, but the current shear is less accurate. Based on the shear method, this paper proposes a layering shear method to calculate the current velocity profile by "layering averaging", and proposes corresponding current calculation methods according to the different types of problems in several field observation data sets from the western Pacific, forming an independent LADCP data processing system. The comparison results have shown that the layering shear method can achieve the same effect as the inverse method in the calculation of the absolute value of current velocity, while retaining the advantages of the shear method in the calculation of the current shear.
Keywords: LADCP, data processing, layering shear method, western Pacific
Quantitative Analysis of Seeing with Height and Time at Muztagh-Ata Site Based on ERA5 Database
8
Authors: Xiao-Qi Wu, Cun-Ying Xiao, Ali Esamdin, Jing Xu, Ze-Wei Wang, Luo Xiao. Research in Astronomy and Astrophysics (SCIE, CAS, CSCD), 2024, No. 1, pp. 87-95 (9 pages)
Seeing is an important index to evaluate the quality of an astronomical site. To estimate seeing at the Muztagh-Ata site with height and time quantitatively, the European Centre for Medium-Range Weather Forecasts reanalysis database (ERA5) is used. Seeing calculated from ERA5 is compared consistently with the Differential Image Motion Monitor seeing at the height of 12 m. Results show that seeing decays exponentially with height at the Muztagh-Ata site. Seeing decays the fastest in fall in 2021 and most slowly with height in summer. The seeing condition is better in fall than in summer. The median value of seeing at 12 m is 0.89 arcsec, the maximum value is 1.21 arcsec in August and the minimum is 0.66 arcsec in October. The median value of seeing at 12 m is 0.72 arcsec in the nighttime and 1.08 arcsec in the daytime. Seeing is a combination of annual and approximately biannual variations with the same phase as temperature and wind speed, indicating that seeing variation with time is influenced by temperature and wind speed. The Richardson number Ri is used to analyze the atmospheric stability, and the variations of seeing are consistent with Ri between layers. These quantitative results can provide an important reference for a telescopic observation strategy.
Keywords: site testing, atmospheric effects, methods: data analysis, telescopes, Earth
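For reference, a minimal sketch of the gradient Richardson number Ri = (g/θ)(∂θ/∂z) / ((∂u/∂z)² + (∂v/∂z)²) used to characterize stability between layers. The profiles below are invented; in the study the inputs would come from ERA5 levels.

```python
import numpy as np

# Toy vertical profiles (assumed values, not ERA5 data).
g = 9.81
z     = np.array([12., 100., 300., 600., 1000., 2000.])        # height (m)
theta = np.array([288., 288.5, 289.2, 290.0, 291.5, 294.0])    # potential temperature (K)
u     = np.array([2.0, 3.5, 5.0, 7.0, 9.0, 12.0])              # zonal wind (m/s)
v     = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])               # meridional wind (m/s)

dtheta_dz = np.gradient(theta, z)
du_dz, dv_dz = np.gradient(u, z), np.gradient(v, z)
shear2 = du_dz**2 + dv_dz**2
Ri = (g / theta) * dtheta_dz / np.maximum(shear2, 1e-10)
for zi, ri in zip(z, Ri):
    label = "stable" if ri > 0.25 else "turbulence possible"
    print(f"z = {zi:6.0f} m   Ri = {ri:6.2f}   {label}")
```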
Dealing with the Data Imbalance Problem in Pulsar Candidate Sifting Based on Feature Selection
9
Authors: Haitao Lin, Xiangru Li. Research in Astronomy and Astrophysics (SCIE, CAS, CSCD), 2024, No. 2, pp. 125-137 (13 pages)
Pulsar detection has become an active research topic in radio astronomy recently. One of the essential procedures for pulsar detection is pulsar candidate sifting (PCS), a procedure for identifying potential pulsar signals in a survey. However, pulsar candidates are always class-imbalanced, as most candidates are non-pulsars such as RFI and only a tiny part of them are from real pulsars. Class imbalance can greatly affect the performance of machine learning (ML) models, resulting in a heavy cost as some real pulsars are misjudged. To deal with the problem, techniques of choosing relevant features to discriminate pulsars from non-pulsars are focused on, which is known as feature selection. Feature selection is a process of selecting a subset of the most relevant features from a feature pool. The distinguishing features between pulsars and non-pulsars can significantly improve the performance of the classifier even if the data are highly imbalanced. In this work, an algorithm for feature selection called the K-fold Relief-Greedy (KFRG) algorithm is designed. KFRG is a two-stage algorithm. In the first stage, it filters out some irrelevant features according to their K-fold Relief scores, while in the second stage, it removes the redundant features and selects the most relevant features by a forward greedy search strategy. Experiments on the data set of the High Time Resolution Universe survey verified that ML models based on KFRG are capable of PCS, correctly separating pulsars from non-pulsars even if the candidates are highly class-imbalanced.
Keywords: methods: data analysis, (stars:) pulsars: general, methods: statistical
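A toy sketch of the two-stage idea behind KFRG, not the authors' implementation: a simple Relief-style score filters features on an imbalanced synthetic set, then a forward greedy search keeps features that improve cross-validated performance. The data, classifier and thresholds are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 600, 12
y = (rng.random(n) < 0.1).astype(int)            # ~10% positives: class-imbalanced labels
X = rng.normal(size=(n, p))
X[:, 0] += 2.0 * y                               # only features 0 and 1 carry signal
X[:, 1] -= 1.5 * y

def relief_scores(X, y, n_samples=200):
    """Binary Relief: reward features that separate the nearest miss from the nearest hit."""
    Xn = (X - X.min(0)) / (X.max(0) - X.min(0) + 1e-12)
    w = np.zeros(X.shape[1])
    idx = rng.choice(len(y), size=n_samples, replace=False)
    for i in idx:
        d = np.abs(Xn - Xn[i]).sum(axis=1)
        d[i] = np.inf
        hit = np.argmin(np.where(y == y[i], d, np.inf))
        miss = np.argmin(np.where(y != y[i], d, np.inf))
        w += np.abs(Xn[i] - Xn[miss]) - np.abs(Xn[i] - Xn[hit])
    return w / n_samples

scores = relief_scores(X, y)
candidates = list(np.argsort(scores)[::-1][:6])   # stage 1: keep top-scoring features

selected, best = [], 0.0
for f in candidates:                              # stage 2: forward greedy search
    trial = selected + [f]
    acc = cross_val_score(LogisticRegression(max_iter=1000), X[:, trial], y, cv=5).mean()
    if acc > best:
        selected, best = trial, acc
print("selected features:", selected, "cv accuracy:", round(best, 3))
```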
An Innovative K-Anonymity Privacy-Preserving Algorithm to Improve Data Availability in the Context of Big Data
10
Authors: Linlin Yuan, Tiantian Zhang, Yuling Chen, Yuxiang Yang, Huang Li. Computers, Materials & Continua (SCIE, EI), 2024, No. 4, pp. 1561-1579 (19 pages)
The development of technologies such as big data and blockchain has brought convenience to life, but at the same time, privacy and security issues are becoming more and more prominent. The K-anonymity algorithm is an effective and low-computational-complexity privacy-preserving algorithm that can safeguard users' privacy by anonymizing big data. However, the algorithm currently suffers from the problem of focusing only on improving user privacy while ignoring data availability. In addition, ignoring the impact of quasi-identifier attributes on sensitive attributes reduces the usability of the processed data for statistical analysis. Based on this, we propose a new K-anonymity algorithm to solve the privacy security problem in the context of big data, while guaranteeing improved data usability. Specifically, we construct a new information loss function based on information quantity theory. Considering that different quasi-identifier attributes have different impacts on sensitive attributes, we set weights for each quasi-identifier attribute when designing the information loss function. In addition, to reduce information loss, we improve K-anonymity in two ways. First, we make the loss of information smaller than in the original table while guaranteeing privacy based on common artificial intelligence algorithms, i.e., the greedy algorithm and the 2-means clustering algorithm. In addition, we improve the 2-means clustering algorithm by designing a mean-center method to select the initial center of mass. Meanwhile, we design the K-anonymity algorithm of this scheme based on the constructed information loss function, the improved 2-means clustering algorithm, and the greedy algorithm, which reduces the information loss. Finally, we experimentally demonstrate the effectiveness of the algorithm in improving the effect of 2-means clustering and reducing information loss.
Keywords: blockchain, big data, K-anonymity, 2-means clustering, greedy algorithm, mean-center method
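A minimal sketch of two ingredients mentioned above on toy records: a weighted information-loss measure over quasi-identifier attributes and a check that every generalized group reaches size k. The columns, weights and generalization rules are assumptions for illustration, not the paper's construction.

```python
import pandas as pd

# Toy table with two quasi-identifiers and one sensitive attribute.
df = pd.DataFrame({
    "age":     [23, 25, 27, 41, 43, 45, 44, 24],
    "zipcode": [13053, 13068, 13053, 14850, 14853, 14850, 14853, 13068],
    "disease": ["flu", "flu", "cold", "cancer", "flu", "cold", "cancer", "cold"],
})
quasi_ids = ["age", "zipcode"]
weights = {"age": 0.7, "zipcode": 0.3}     # assumed impact of each QI on the sensitive attribute

# Generalize: age -> decade bucket, zipcode -> 3-digit prefix.
anon = df.copy()
anon["age"] = (df["age"] // 10) * 10
anon["zipcode"] = df["zipcode"] // 100

def weighted_loss(orig, anon, quasi_ids, weights):
    """Normalized-range style loss: how wide each generalized QI value is, weighted."""
    loss = 0.0
    for q in quasi_ids:
        ranges = orig.groupby(anon[q])[q].agg(lambda s: s.max() - s.min())
        full_range = orig[q].max() - orig[q].min()
        loss += weights[q] * (ranges.mean() / full_range if full_range else 0.0)
    return loss

k = 2
group_sizes = anon.groupby(quasi_ids).size()
print("k-anonymous (k=2):", bool((group_sizes >= k).all()))
print("weighted information loss:", round(weighted_loss(df, anon, quasi_ids, weights), 3))
```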
Parameter Estimation of a Valve-Controlled Cylinder System Model Based on Bench Test and Operating Data Fusion
11
Authors: Deying Su, Shaojie Wang, Haojing Lin, Xiaosong Xia, Yubing Xu, Liang Hou. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2024, No. 2, pp. 247-263 (17 pages)
The accurate estimation of parameters is the premise for establishing a high-fidelity simulation model of a valve-controlled cylinder system. Bench test data are easily obtained, but it is challenging to emulate actual loads in research on parameter estimation of valve-controlled cylinder systems. Despite the actual load information contained in the operating data of the control valve, its acquisition remains challenging. This paper proposes a method that fuses bench test and operating data for parameter estimation to address the aforementioned problems. The proposed method is based on Bayesian theory, and its core is a pooled fusion of prior information from bench test and operating data. Firstly, a system model is established, and the parameters in the model are analysed. Secondly, the bench and operating data of the system are collected. Then, the model parameters and weight coefficients are estimated using the data fusion method. Finally, the estimated effects of the data fusion method, the Bayesian method, and the particle swarm optimisation (PSO) algorithm on system model parameters are compared. The research shows that the weight coefficient represents the contribution of different prior information to the parameter estimation result. The effect of parameter estimation based on the data fusion method is better than that of the Bayesian method and the PSO algorithm. Increasing load complexity leads to a decrease in model accuracy, highlighting the crucial role of the data fusion method in parameter estimation studies.
Keywords: valve-controlled cylinder system, parameter estimation, Bayesian theory, data fusion method, weight coefficients
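A generic sketch of pooling prior information from two sources with a weight coefficient before a Bayesian update, which conveys the flavor of such a fusion step; the priors, the weight and the likelihood below are assumed numbers rather than the paper's valve-controlled cylinder model.

```python
import numpy as np

# Grid of candidate values for a single (dimensionless) parameter.
theta = np.linspace(0.0, 2.0, 2001)
dtheta = theta[1] - theta[0]

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

prior_bench = normal_pdf(theta, 1.10, 0.05)          # prior learned from bench tests (assumed)
prior_oper  = normal_pdf(theta, 0.95, 0.15)          # prior learned from operating data (assumed)
w = 0.6                                              # weight coefficient for the bench prior
prior_pooled = w * prior_bench + (1 - w) * prior_oper    # linear opinion pool of the two priors

likelihood = normal_pdf(theta, 1.02, 0.08)           # likelihood from new measurements (assumed)
posterior = prior_pooled * likelihood
posterior /= posterior.sum() * dtheta                # normalize on the grid

print("posterior mean:", (theta * posterior).sum() * dtheta)
```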
Machine Learning-based Identification of Contaminated Images in Light Curve Data Preprocessing
12
Authors: Hui Li, Rong-Wang Li, Peng Shu, Yu-Qiang Li. Research in Astronomy and Astrophysics (SCIE, CAS, CSCD), 2024, No. 4, pp. 287-295 (9 pages)
Attitude is one of the crucial parameters for space objects and plays a vital role in collision prediction and debris removal. Analyzing light curves to determine attitude is the most commonly used method. In photometric observations, outliers may exist in the obtained light curves due to various reasons. Therefore, preprocessing is required to remove these outliers to obtain high-quality light curves. Through statistical analysis, the reasons leading to outliers can be categorized into two main types: first, the brightness of the object significantly increases due to the passage of a star nearby, referred to as "stellar contamination," and second, the brightness markedly decreases due to cloud cover, referred to as "cloudy contamination." The traditional approach of manually inspecting images for contamination is time-consuming and labor-intensive. However, we propose the utilization of machine learning methods as a substitute. Convolutional Neural Networks and SVMs are employed to identify cases of stellar contamination and cloudy contamination, achieving F1 scores of 1.00 and 0.98 on a test set, respectively. We also explore other machine learning methods such as ResNet-18 and the Light Gradient Boosting Machine, and then conduct comparative analyses of the results.
Keywords: techniques: image processing, methods: data analysis, light pollution
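A hedged sketch of the SVM branch of such a pipeline on simulated frames, where "cloudy contamination" is mimicked by globally dimmed, flatter images; the real inputs would be the photometric frames themselves, and the features and labels below are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
def make_frames(n, contaminated):
    frames = rng.normal(100.0, 10.0, size=(n, 16, 16))
    if contaminated:
        frames = frames * 0.6 + rng.normal(0.0, 2.0, size=(n, 16, 16))   # dimmer, flatter
    return frames

X = np.vstack([make_frames(300, False), make_frames(300, True)]).reshape(600, -1)
y = np.r_[np.zeros(300), np.ones(300)]
feats = np.c_[X.mean(axis=1), X.std(axis=1)]     # simple per-frame features: mean and std

X_tr, X_te, y_tr, y_te = train_test_split(feats, y, test_size=0.3, random_state=1, stratify=y)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print("F1 on held-out frames:", round(f1_score(y_te, clf.predict(X_te)), 3))
```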
Lossless Compression Method for the Magnetic and Helioseismic Imager(MHI)Payload
13
Authors: Li-Yue Tong, Jia-Ben Lin, Yuan-Yong Deng, Kai-Fan Ji, Jun-Feng Hou, Quan Wang, Xiao Yang. Research in Astronomy and Astrophysics (SCIE, CAS, CSCD), 2024, No. 4, pp. 214-221 (8 pages)
The Solar Polar-orbit Observatory (SPO), proposed by Chinese scientists, is designed to observe the solar polar regions in an unprecedented way with a spacecraft traveling in a large solar inclination angle and a small ellipticity. However, one of the most significant challenges lies in ultra-long-distance data transmission, particularly for the Magnetic and Helioseismic Imager (MHI), which is the most important payload and generates the largest volume of data in SPO. In this paper, we propose a tailored lossless data compression method based on the measurement mode and characteristics of MHI data. The background outside the solar disk is removed to decrease the number of pixels in an image under compression. Multiple predictive coding methods are combined to eliminate the redundancy by utilizing the correlation (space, spectrum, and polarization) in the data set, improving the compression ratio. Experimental results demonstrate that our method achieves an average compression ratio of 3.67. The compression time is also less than the general observation period. The method exhibits strong feasibility and can be easily adapted to MHI.
Keywords: methods: data analysis, techniques: image processing, Sun: magnetic fields, Sun: photosphere
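A small sketch of the predictive-coding idea on a synthetic solar-disk image, not the MHI pipeline: predict each pixel from its left neighbour, keep only the residuals, and compare zlib-compressed sizes of raw data versus residuals. The image, predictor and compressor are assumptions.

```python
import numpy as np
import zlib

# Synthetic "solar disk" image: smooth gradient inside the disk, zero background.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:512, 0:512]
disk = ((xx - 256) ** 2 + (yy - 256) ** 2) < 200 ** 2
image = (disk * (1500.0 + 0.5 * xx + 0.3 * yy) + rng.normal(0, 2, (512, 512))).astype(np.int16)
image[~disk] = 0                                   # "background removal" outside the disk

pred = np.zeros_like(image)
pred[:, 1:] = image[:, :-1]                        # previous-pixel predictor
residual = (image - pred).astype(np.int16)         # residuals are small where the image is smooth

raw_size = len(zlib.compress(image.tobytes(), 9))
res_size = len(zlib.compress(residual.tobytes(), 9))
print("compression ratio, raw pixels:", round(image.nbytes / raw_size, 2))
print("compression ratio, residuals: ", round(image.nbytes / res_size, 2))
```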
Fatigue Safety Assessment of Concrete Continuous Rigid Frame Bridge Based on Rain Flow Counting Method and Health Monitoring Data
14
Authors: Yinghua Li, Junyong He, Xiaoqing Zeng, Yanxing Tang. Journal of Architectural Environment & Structural Engineering Research, 2023, No. 3, pp. 31-40 (10 pages)
The fatigue of concrete structures gradually appears after being subjected to alternating loads for a long time, and accidents caused by fatigue failure of bridge structures also occur from time to time. Aiming at the problem of degradation of long-span continuous rigid frame bridges due to fatigue and environmental effects, this paper suggests a method to analyze the fatigue degradation mechanism of this type of bridge, which combines long-term in-situ monitoring data collected by the health monitoring system (HMS) and fatigue theory. In the paper, the authors mainly carry out the research work in the following aspects. First, a long-span continuous rigid frame bridge installed with an HMS is used as an example, and a large amount of health monitoring data has been acquired, which can provide efficient information for fatigue analysis in terms of the equivalent stress range and the cumulative number of stress cycles. Next, for calculating the cumulative fatigue damage of the bridge structure, the fatigue stress spectrum obtained by the rain flow counting method, S-N curves, and damage criteria are used for fatigue damage analysis. Moreover, linear damage accumulation through the Palmgren-Miner rule is considered for the counting of stress cycles. The health monitoring data are adopted to obtain fatigue stress data, and the rain flow counting method is used to count the amplitude-varying fatigue stress. The proposed fatigue reliability approach can estimate the fatigue damage degree of bridge structures and its evolution law well, and can also help bridge engineers assess the remaining service duration.
Keywords: long-span continuous rigid frame bridge, rain flow counting method, fatigue performance, health monitoring system, strain monitoring data
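The damage-accumulation step can be sketched directly: given a stress-range histogram (which rain-flow counting of the monitored stress history would produce) and an S-N curve N = C·S^(-m), the Palmgren-Miner rule sums damage as D = Σ nᵢ/Nᵢ. The stress ranges, cycle counts and S-N constants below are illustrative, not values from the bridge HMS.

```python
import numpy as np

stress_range = np.array([10., 20., 30., 40., 60.])   # MPa, equivalent stress ranges (assumed)
cycles       = np.array([5e6, 8e5, 1e5, 2e4, 1e3])   # counted cycles per range (assumed)

C, m = 1.0e12, 3.0                                   # assumed S-N curve parameters
N_allowed = C * stress_range ** (-m)                 # cycles to failure at each stress range
damage = np.sum(cycles / N_allowed)                  # Miner's linear damage accumulation
print("cumulative fatigue damage D =", round(damage, 4), "(failure expected as D approaches 1)")
```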
STUDY ON THE ADJOINT METHOD IN DATA ASSIMILATION AND THE RELATED PROBLEMS (Cited: 8)
15
Authors: Lü Xianqing (吕咸青), Wu Ziku (吴自库), Gu Yi (谷艺), Tian Jiwei (田纪伟). Applied Mathematics and Mechanics (应用数学和力学) (EI, CSCD, PKU Core), 2004, No. 6, pp. 581-590 (10 pages)
It is not reasonable that one can only use the adjoint of the model in data assimilation. The simulated numerical experiment shows that for the tidal model, the result of the adjoint of the equation is almost the same as that of the adjoint of the model: the averaged absolute difference of the amplitude between observations and simulation is less than 5.0 cm and that of the phase lag is less than 5.0°. The results are both in good agreement with the observed M2 tide in the Bohai Sea and the Yellow Sea. For comparison, the traditional methods have also been used to simulate the M2 tide in the Bohai Sea and the Yellow Sea. The initial guess values of the boundary conditions are given first, and then are adjusted to acquire simulated results that are as close as possible to the observations. As the boundary conditions contain 72 values, which of them should be adjusted and how to adjust them can only be partially resolved by adjusting them many times. Satisfactory results are hard to acquire even when enormous effort is expended. Here, the automation of the treatment of the open boundary conditions is realized. The method is unique and superior to the traditional methods. It is emphasized that if the adjoint of the equation is used, tedious and complicated mathematical deduction can be avoided. Therefore the adjoint of the equation should attract much attention.
Keywords: data assimilation, variational analysis, adjoint method, tide, open boundary conditions
DEVELOPMENT OF A DATA MINING METHOD FOR LAND CONTROL (Cited: 3)
16
Authors: Wang Shuliang, Wang Xinzhou, Shi Wenzhong. Geo-Spatial Information Science, 2001, No. 1, pp. 68-76 (9 pages)
Land resources are facing crises of being misused, especially in intersection areas between town and country, and land control has to be enforced. This paper presents the development of a data mining method for land control. A vector-match method for the prerequisite of data mining, i.e., data cleaning, is proposed, which deals with both character and numeric data via vectorizing character strings and matching numbers. A minimal decision algorithm of rough sets is used to discover the knowledge hidden in the data warehouse. In order to monitor land use dynamically and accurately, it is suggested to set up a real-time land control system based on GPS, digital photogrammetry and online data mining. Finally, the method is applied in the intersection area between town and country of Wuhan city, and a set of knowledge about land control is discovered.
Keywords: land control, data mining, vector-match method, rough set, GIS
Application of a Bayesian method to data-poor stock assessment by using Indian Ocean albacore (Thunnus alalunga) stock assessment as an example (Cited: 14)
17
Authors: GUAN Wenjiang, TANG Lin, ZHU Jiangfeng, TIAN Siquan, XU Liuxiong. Acta Oceanologica Sinica (SCIE, CAS, CSCD), 2016, No. 2, pp. 117-125 (9 pages)
It is widely recognized that assessments of the status of data-poor fish stocks are challenging and that Bayesian analysis is one of the methods which can be used to improve the reliability of stock assessments in data-poor situations through borrowing strength from prior information deduced from species with good-quality data or other known information. Because there is considerable uncertainty remaining in the stock assessment of albacore tuna (Thunnus alalunga) in the Indian Ocean due to the limited and low-quality data, we investigate the advantages of a Bayesian method in data-poor stock assessment by using the Indian Ocean albacore stock assessment as an example. Eight Bayesian biomass dynamics models with different prior assumptions and catch data series were developed to assess the stock. The results show that (1) the rationality of the choice of catch data series and the assumption of parameters could be enhanced by analyzing the posterior distribution of the parameters; (2) the reliability of the stock assessment could be improved by using demographic methods to construct a prior for the intrinsic rate of increase (r). Because we can make use of more information to improve the rationality of parameter estimation and the reliability of the stock assessment compared with traditional statistical methods, by incorporating any available knowledge into the informative priors and analyzing the posterior distribution based on a Bayesian framework in data-poor situations, we suggest that the Bayesian method should be an alternative method to be applied in data-poor species stock assessment, such as for Indian Ocean albacore.
Keywords: data-poor stock assessment, Bayesian method, catch data series, demographic method, Indian Ocean, Thunnus alalunga
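A minimal sketch of the Schaefer-type biomass dynamics recursion that such models build on, B[t+1] = B[t] + r·B[t](1 - B[t]/K) - C[t]. The growth rate r, carrying capacity K, initial biomass and catch series below are illustrative; in the Bayesian setting r and K would carry informative (for example, demographically derived) priors.

```python
import numpy as np

r, K = 0.3, 500.0                 # intrinsic growth rate and carrying capacity (assumed)
B = 400.0                         # starting biomass, thousand tonnes (assumed)
catches = np.array([30., 35., 40., 45., 40., 38., 36., 35.])   # assumed catch series

biomass = [B]
for c in catches:
    # surplus-production update, floored to keep biomass positive
    B = max(B + r * B * (1 - B / K) - c, 1e-6)
    biomass.append(B)
print("biomass trajectory:", np.round(biomass, 1))
print("B_final / K =", round(biomass[-1] / K, 2))
```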
Fractal Method for Statistical Analysis of Geological Data (Cited: 2)
18
Authors: Meng Xianguo, Zhao Pengda (China University of Geosciences, Wuhan 430074). Journal of Earth Science (SCIE, CAS, CSCD), 1991, No. 1, pp. 114-119 (6 pages)
This paper establishes the phase space in the light of spatial series data, discusses the fractal structure of geological data in terms of correlated functions, and studies the chaos of these data. In addition, it introduces R/S analysis, originally used for time series analysis, into spatial series to calculate the structural fractal dimensions of ranges and standard deviations for spatial series data, and to establish the fractal dimension matrix and the procedures for plotting the fractal dimension anomaly diagram with vector distances of fractal dimension. Finally, it gives examples of its application.
Keywords: geological data, fractal method, fractal dimension, spatial series, R/S analysis
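A short sketch of rescaled-range (R/S) analysis on a synthetic series: the Hurst exponent H is estimated as the slope of log(R/S) against log(window length), and for a self-affine record the fractal dimension is commonly taken as D = 2 - H. The series below is random; the paper applies the idea to spatial series of geological data.

```python
import numpy as np

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=4096))          # Brownian-like walk; its increments should give H near 0.5

def rescaled_range(x, window):
    """Average R/S over non-overlapping windows of the given length."""
    rs = []
    for start in range(0, len(x) - window + 1, window):
        seg = x[start:start + window]
        dev = np.cumsum(seg - seg.mean())
        r = dev.max() - dev.min()                  # range of cumulative deviations
        s = seg.std()
        if s > 0:
            rs.append(r / s)
    return np.mean(rs)

increments = np.diff(series)
windows = np.array([8, 16, 32, 64, 128, 256, 512])
rs_vals = np.array([rescaled_range(increments, w) for w in windows])
H = np.polyfit(np.log(windows), np.log(rs_vals), 1)[0]
print("Hurst exponent H:", round(H, 2), "| fractal dimension D = 2 - H:", round(2 - H, 2))
```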
Discussion of the Evaluation Method and Value of Green Building's POE in the Era of Large Data (Cited: 2)
19
Authors: Bi-Feng Zhu, Jian Ge. Journal of Harbin Institute of Technology (New Series) (EI, CAS), 2014, No. 5, pp. 10-14 (5 pages)
China is vigorously carrying out the construction of green buildings and ecological cities during "The 12th Five-Year Plan". Although identification systems for design and operation have been implemented in the evaluation of green buildings, an appropriate evaluation after use is still lacking. The actual operation results of many buildings that have obtained a green building label are not satisfactory from the users' perspective. In this paper, an evaluation method that combines actual building energy consumption and users' satisfaction is discussed, based on post-occupancy evaluation (POE) theory and Big Data technology. Through the comparison and analysis of objective building operation metrics and subjective user satisfaction indicators, the POE of green buildings is achieved. Finally, the study analyzes the assessed value of green buildings' POE over three time dimensions: short-term, medium-term and long-term, and gives an outlook on directions for follow-up study.
Keywords: green building, POE, Big Data, evaluation method, value
Study on wave energy resource assessing method based on altimeter data——A case study in Northwest Pacific (Cited: 5)
20
Authors: WAN Yong, ZHANG Jie, MENG Junmin, WANG Jing, DAI Yongshou. Acta Oceanologica Sinica (SCIE, CAS, CSCD), 2016, No. 3, pp. 117-129 (13 pages)
Wave energy is a very important ocean renewable energy resource. A reliable assessment of wave energy resources must be performed before they can be exploited. Compared with wave models, altimeters can provide more accurate in situ observations for ocean waves, which can serve as a novel data source for wave energy assessment. The advantage of altimeter data is to provide accurate significant wave height observations for waves. In order to exploit the characteristics and advantages of altimeter data and apply altimeter data to wave energy assessment, in this study we established an assessing method for wave energy in a local sea area which is dedicated to altimeter data. This method includes three parts: data selection and processing, the establishment of an evaluation index system, and a criterion for regional division. Then a case study of the Northwest Pacific was performed to discuss the specific application of this method. The results show that the assessing method in this paper can assess reserves and temporal and spatial distributions effectively and provide scientific references for the siting of wave power plants and the design of wave energy converters.
Keywords: altimeter data, wave energy resources assessment, assessing method, Northwest Pacific, wave power density
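One common wave power density index behind such assessments is the deep-water wave energy flux P = ρ·g²·Hs²·Te/(64π) per metre of wave crest. Altimeters supply the significant wave height Hs; the energy period Te used below is an assumed companion value, and the paper's own index system is not reproduced here.

```python
import numpy as np

rho, g = 1025.0, 9.81                      # sea-water density (kg/m^3) and gravity (m/s^2)
Hs = np.array([1.2, 2.0, 2.8, 3.5])        # significant wave height from altimetry (m), assumed
Te = np.array([6.0, 7.5, 8.5, 9.0])        # energy period (s), assumed

# Deep-water wave energy flux per metre of wave crest, converted to kW/m.
P = rho * g**2 * Hs**2 * Te / (64 * np.pi) / 1000.0
for h, t, p in zip(Hs, Te, P):
    print(f"Hs = {h:.1f} m, Te = {t:.1f} s  ->  wave power density = {p:5.1f} kW/m")
```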