Journal Articles
5 articles found
1. Scattered Data Interpolation Using Cubic Trigonometric Bézier
Authors: Ishak Hashim, Nur Nabilah Che Draman, Samsul Ariffin Abdul Karim, Wee Ping Yeo, Dumitru Baleanu. Computers, Materials & Continua (SCIE, EI), 2021, Issue 10, pp. 221-236 (16 pages)
This paper discusses scattered data interpolation using cubic trigonometric Bézier triangular patches with C1 continuity everywhere. We derive the C1 condition on each pair of adjacent triangles. On each triangular patch, we employ a convex combination of three local schemes. The final interpolant with the rational corrected scheme is suitable for both regular and irregular scattered data sets. We tested the proposed scheme with 36, 65, and 100 data points for some well-known test functions. The scheme was also applied to interpolate electric potential data. We compared the performance of our proposed method against existing scattered data interpolation schemes such as Powell–Sabin (PS) and Clough–Tocher (CT) by measuring the maximum error, root mean square error (RMSE) and coefficient of determination (R^2). From the results obtained, our proposed method is competitive with the cubic Bézier, cubic Ball, PS and CT triangle-splitting schemes for interpolating scattered data surfaces. This is significant since PS and CT require each triangle to be split into several micro-triangles. (A brief illustrative sketch of the convex-combination step follows the keywords below.)
Keywords: cubic trigonometric Bézier triangular patches; C1 sufficient condition; scattered data interpolation
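As a rough illustration of the convex-combination idea described in the abstract, the sketch below blends three local patch estimates with rational barycentric weights. The weight formula, the vertex fallback, and the name `convex_combination` are illustrative assumptions, not the paper's exact rational corrected scheme.

```python
def convex_combination(P1, P2, P3, b):
    """Blend three local patch estimates at a barycentric point b = (u, v, w).
    The rational weights below are one common choice for triangular
    convex-combination schemes and only stand in for the paper's rational
    corrected scheme, whose exact weights may differ."""
    u, v, w = b
    # Rational blending weights built from the barycentric coordinates.
    w1, w2, w3 = (v * w) ** 2, (u * w) ** 2, (u * v) ** 2
    s = w1 + w2 + w3
    if s == 0.0:  # at a vertex, fall back to the matching local scheme
        return P1 if u == 1 else (P2 if v == 1 else P3)
    return (w1 * P1 + w2 * P2 + w3 * P3) / s

# Example: blend three local surface values at the patch centroid.
print(convex_combination(1.0, 2.0, 3.0, (1/3, 1/3, 1/3)))
```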
2. Research on Interpolation Method for Missing Electricity Consumption Data
Authors: Junde Chen, Jiajia Yuan, Weirong Chen, Adnan Zeb, Md Suzauddola, Yaser A. Nanehkaran. Computers, Materials & Continua (SCIE, EI), 2024, Issue 2, pp. 2575-2591 (17 pages)
Missing values are one of the main causes of dirty data. Without high-quality data there can be no reliable analysis results or precise decision-making, so the data warehouse needs to integrate consistently high-quality data. In the power system, the electricity consumption data of some large users cannot be collected normally, resulting in missing data, which affects the calculation of power supply and eventually leads to large errors in the daily power line loss rate. For this problem, this study proposes a group method of data handling (GMDH) based interpolation method for missing electricity consumption data in distribution networks and applies it to actually collected electricity data. First, the dependent and independent variables are defined from the original data, and the upper and lower limits of the missing values are determined from prior knowledge or existing data. All missing data are randomly interpolated within these limits. Then, a GMDH network is established to obtain the optimal-complexity model, which is used to predict the missing data and replace the previously imputed electricity consumption values. This process is iterated until the missing values no longer change. Under a relatively small noise level (α = 0.25), the proposed approach achieves a maximum error of no more than 0.605%. Experimental findings demonstrate the efficacy and feasibility of the proposed approach, which transforms incomplete data into complete data. The approach also provides a strong basis for electricity-theft diagnosis and metering-fault analysis in electricity enterprises. (A brief illustrative sketch of the iterative imputation loop follows the keywords below.)
Keywords: data interpolation; GMDH; electricity consumption data; distribution system
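The iterative imputation loop described in the abstract (bound the missing values, initialise them randomly, refit, re-predict, repeat until stable) can be sketched roughly as follows. An ordinary least-squares fit stands in for the GMDH network, and the name `iterative_impute` and its parameters are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def iterative_impute(X, y, missing_mask, lo, hi, rng=None, max_iter=50, tol=1e-4):
    """Iteratively fill missing targets, loosely in the spirit of the paper's
    GMDH-based scheme.  A plain least-squares fit stands in for the GMDH
    network; lo/hi play the role of the prior upper/lower limits on the
    missing consumption values."""
    rng = np.random.default_rng(rng)
    y = y.astype(float)
    # Step 1: random initialisation of the missing entries within [lo, hi].
    y[missing_mask] = rng.uniform(lo, hi, size=missing_mask.sum())
    A = np.column_stack([X, np.ones(len(X))])            # design matrix with bias
    for _ in range(max_iter):
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # surrogate "model" fit
        y_new = np.clip(A[missing_mask] @ coef, lo, hi)  # respect prior limits
        converged = np.max(np.abs(y_new - y[missing_mask])) < tol
        y[missing_mask] = y_new                          # replace last imputation
        if converged:
            break
    return y

# Toy usage: consumption driven by two observable features, ~10% missing targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + 10.0
mask = rng.random(200) < 0.1
print(iterative_impute(X, y, mask, lo=0.0, hi=20.0, rng=1)[:5])
```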
3. Spatio-temporal variability of surface chlorophyll a in the Yellow Sea and the East China Sea based on reconstructions of satellite data of 2001-2020
Authors: Weichen XIE, Tao WANG, Wensheng JIANG. Journal of Oceanology and Limnology (SCIE, CAS, CSCD), 2024, Issue 2, pp. 390-407 (18 pages)
Chlorophyll-a (Chl-a) concentration is a primary indicator for marine environmental monitoring. The spatio-temporal variations of sea surface Chl-a concentration in the Yellow Sea (YS) and the East China Sea (ECS) during 2001-2020 were investigated by reconstructing MODIS Level 3 products with the data interpolation empirical orthogonal function (DINEOF) method. The reconstructions obtained by interpolating the combined MODIS daily + 8-day datasets were better than those obtained from the daily or 8-day data alone. Chl-a concentration in the YS and the ECS reached its maximum in spring, when blooms occurred, decreased through summer and autumn, and increased again in late autumn and early winter. By performing empirical orthogonal function (EOF) decomposition of the reconstructed fields and correlation analysis with several potential environmental factors, we found that sea surface temperature (SST) plays a significant role in the seasonal variation of Chl a, especially during spring and summer. The increase of SST in spring, together with the nutrients mixed into the upper layer during the preceding winter, might favor the occurrence of spring blooms. High SST throughout the summer strengthens vertical stratification and prevents nutrient supply from deep water, resulting in low surface Chl-a concentrations. The sea surface Chl-a concentration in the YS was found to have decreased significantly from 2012 to 2020, which is possibly related to the Pacific Decadal Oscillation (PDO). (A brief illustrative sketch of a DINEOF-style gap-filling loop follows the keywords below.)
Keywords: chlorophyll a (Chl a); data interpolation empirical orthogonal function (DINEOF); empirical orthogonal function (EOF) analysis; Yellow Sea; East China Sea
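A minimal sketch of a DINEOF-style gap-filling loop is given below: missing pixels are initialised from column means and then repeatedly replaced by a truncated EOF (SVD) reconstruction. The fixed mode count, the initial guess, and the name `dineof_reconstruct` are simplifying assumptions; the real DINEOF procedure cross-validates the number of modes and checks convergence formally.

```python
import numpy as np

def dineof_reconstruct(field, mask, n_modes=3, n_iter=50):
    """Tiny DINEOF-style sketch: fill gaps in a space-by-time matrix by
    iterating a truncated EOF (SVD) reconstruction.  Here n_modes is fixed
    and the column mean of the observed values is the first guess."""
    X = field.astype(float)
    col_mean = np.nanmean(np.where(mask, np.nan, X), axis=0)
    X[mask] = np.take(col_mean, np.where(mask)[1])      # initial guess at gaps
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]
        X[mask] = X_low[mask]                           # update only the gaps
    return X

# Toy example: a rank-2 field with roughly 20% of the pixels missing.
rng = np.random.default_rng(0)
truth = rng.normal(size=(60, 2)) @ rng.normal(size=(2, 40))
mask = rng.random(truth.shape) < 0.2
obs = np.where(mask, 0.0, truth)
rec = dineof_reconstruct(obs, mask, n_modes=2)
print(np.max(np.abs(rec - truth)[mask]))                # residual at the gaps
```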
4. Brittleness index predictions from Lower Barnett Shale well-log data applying an optimized data matching algorithm at various sampling densities (Cited by: 1)
Author: David A. Wood. Geoscience Frontiers (SCIE, CAS, CSCD), 2021, Issue 6, pp. 444-457 (14 pages)
The capability of accurately predicting the mineralogical brittleness index (BI) from basic suites of well logs is desirable because it provides a useful indicator of the fracability of tight formations. Measuring mineralogical components in rocks is expensive and time consuming. However, the basic well-log curves are not well correlated with BI, so correlation-based machine-learning methods cannot derive highly accurate BI predictions from such data. A correlation-free, optimized data-matching algorithm is configured to predict BI on a supervised basis from well-log and core data available from two published wells in the Lower Barnett Shale Formation (Texas). This transparent open box (TOB) algorithm matches data records by calculating the sum of squared errors between their variables and selecting the best matches as those with the minimum squared errors. It then applies optimizers to adjust the weights applied to the individual variable errors so as to minimize the root mean square error (RMSE) between the calculated and predicted BI. The prediction accuracy achieved by TOB using just five well logs (Gr, ρb, Ns, Rs, Dt) to predict BI depends on the density of data records sampled. At a sampling density of about one sample per 0.5 ft, BI is predicted with RMSE ~0.056 and R^2 ~0.790. At a sampling density of about one sample per 0.1 ft, BI is predicted with RMSE ~0.008 and R^2 ~0.995. Adding a stratigraphic height index as an additional (sixth) input variable improves BI prediction accuracy to RMSE ~0.003 and R^2 ~0.999 for the two wells, with only 1 record in 10,000 yielding a BI prediction error of more than ±0.1. The model has the potential to be applied on an unsupervised basis to predict BI from basic well-log data in surrounding wells lacking mineralogical measurements but with similar lithofacies and burial histories. The method could also be extended to predict elastic rock properties and seismic attributes from well and seismic data to improve the precision of brittleness index and fracability mapping spatially. (A brief illustrative sketch of the weighted data-matching idea follows the keywords below.)
Keywords: well-log brittleness index estimates; data record sample densities; zoomed-in data interpolation; correlation-free prediction analysis; mineralogical and elastic influences
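The matching-plus-weight-optimisation idea attributed to the TOB algorithm in the abstract might look roughly like the sketch below, where a weighted sum of squared errors ranks training records and a crude random search tunes the per-variable weights. The k-nearest averaging, the random search, and the names `tob_predict` and `tune_weights` are assumptions for illustration only, not the published algorithm.

```python
import numpy as np

def tob_predict(train_X, train_y, query_X, weights, k=3):
    """Correlation-free matching in the spirit of the TOB idea: score every
    training record by the weighted sum of squared errors of its variables
    against the query, then average the BI of the k best matches."""
    d2 = ((train_X[None, :, :] - query_X[:, None, :]) ** 2 * weights).sum(axis=2)
    best = np.argsort(d2, axis=1)[:, :k]                 # indices of best matches
    return train_y[best].mean(axis=1)

def tune_weights(train_X, train_y, val_X, val_y, n_trials=200, seed=0):
    """Crude random-search stand-in for the optimiser stage that adjusts
    per-variable error weights to minimise validation RMSE."""
    rng = np.random.default_rng(seed)
    best_w, best_rmse = np.ones(train_X.shape[1]), np.inf
    for _ in range(n_trials):
        w = rng.uniform(0.0, 1.0, size=train_X.shape[1])
        rmse = np.sqrt(np.mean((tob_predict(train_X, train_y, val_X, w) - val_y) ** 2))
        if rmse < best_rmse:
            best_w, best_rmse = w, rmse
    return best_w, best_rmse

# Toy usage with five synthetic "log" variables standing in for Gr, rho_b, Ns, Rs, Dt.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = X @ np.array([0.4, -0.3, 0.2, 0.1, -0.2]) + 0.01 * rng.normal(size=300)
w, rmse = tune_weights(X[:200], y[:200], X[200:], y[200:])
print(round(rmse, 4))
```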
5. Random Low Patch-rank Method for Interpolation of Regularly Missing Traces
Author: Jianwei Ma. Journal of Harbin Institute of Technology (New Series) (EI, CAS), 2020, Issue 3, pp. 205-216 (12 pages)
Assuming that seismic data in a suitable domain are low rank while missing traces or noise increase the rank of the data matrix, rank-reduced methods have been applied successfully to seismic interpolation and denoising. These rank-reduced methods mainly include Cadzow reconstruction, which uses eigendecomposition of the Hankel matrix in the f-x (frequency-spatial) domain, and nuclear-norm minimization (NNM), which is based on rigorous optimization theory for matrix completion (MC). In this paper, a low patch-rank MC with a random-overlapped texture-patch mapping is proposed for interpolation of regularly missing traces in a three-dimensional (3D) seismic volume. The random overlap plays a simple but important role in making the low-rank method effective for aliased data: it shifts the regular column missing of the data matrix to random point missing in the mapped matrix, where the missing data increase the rank and the classic low-rank MC theory therefore applies. Unlike Hankel-matrix-based rank-reduced methods, the proposed method does not assume a superposition of linear events, but assumes that the data have repeated texture patterns. Such data lead to a low-rank matrix after the proposed texture-patch mapping, so the method can interpolate waveforms with varying dips in space. A fast low-rank factorization method and an orthogonal rank-one matrix pursuit method are applied to solve the interpolation model. The former avoids singular value decomposition (SVD) computation and the latter only needs to compute the large singular values during iterations; both fast algorithms are suitable for large-scale data. Simple averaging of several realizations obtained from different random-overlapped texture-patch mappings can further increase the reconstructed signal-to-noise ratio (SNR). Examples on synthetic and field data show the successful performance of the presented method. (A brief illustrative sketch of the texture-patch mapping follows the keywords below.)
Keywords: seismic data interpolation; low-rank method; random patch; geophysics
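The key step described in the abstract, mapping a section with regularly missing traces into a matrix of randomly overlapped texture patches so that the gaps become scattered, is sketched below; the matrix-completion solvers themselves (low-rank factorization, orthogonal rank-one matrix pursuit) are omitted. Patch sizes, the overlap factor, and the helper names `patch_map`/`patch_unmap` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def patch_map(D, ph, pw, rng):
    """Map a 2-D section into a matrix whose columns are randomly overlapped
    ph x pw texture patches.  Random patch origins turn the regular column
    gaps of D into scattered gaps in the mapped matrix, which is what lets
    standard matrix completion be applied afterwards."""
    nt, nx = D.shape
    cols, origins = [], []
    for _ in range((nt // ph) * (nx // pw) * 2):        # roughly 2x overlap
        i = rng.integers(0, nt - ph + 1)
        j = rng.integers(0, nx - pw + 1)
        cols.append(D[i:i + ph, j:j + pw].ravel())
        origins.append((i, j))
    return np.array(cols).T, origins

def patch_unmap(M, origins, shape, ph, pw):
    """Average overlapping patches back onto the original grid
    (used after a matrix-completion solver has filled the mapped matrix)."""
    acc, cnt = np.zeros(shape), np.zeros(shape)
    for col, (i, j) in zip(M.T, origins):
        acc[i:i + ph, j:j + pw] += col.reshape(ph, pw)
        cnt[i:i + ph, j:j + pw] += 1.0
    return acc / np.maximum(cnt, 1.0)

# Toy usage: every other trace (column) of a synthetic section is missing.
rng = np.random.default_rng(0)
section = np.sin(np.linspace(0, 8 * np.pi, 64))[:, None] * np.ones((1, 32))
section[:, 1::2] = 0.0                                  # regularly missing traces
M, origins = patch_map(section, 8, 8, rng)
print(M.shape)         # mapped matrix; an MC solver would be applied to M here
back = patch_unmap(M, origins, section.shape, 8, 8)     # round trip, no MC yet
```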