This paper presents a simple nonparametric regression approach to data-driven computing in elasticity. We apply kernel regression to the material data set and formulate a system of nonlinear equations that is solved to obtain a static equilibrium state of an elastic structure. Preliminary numerical experiments illustrate that, compared with existing methods, the proposed method finds a reasonable solution even when the data points in a given material data set are distributed coarsely.
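The kernel regression step the abstract describes can be illustrated with a minimal Nadaraya-Watson sketch; the Gaussian kernel, the bandwidth, and the strain-stress values below are illustrative assumptions, not the paper's actual data set:

```python
import numpy as np

def kernel_regression(x_query, x_data, y_data, bandwidth=0.5):
    """Nadaraya-Watson kernel regression with a Gaussian kernel.

    Estimates y at x_query as a weighted average of all data points,
    with weights decaying smoothly with distance, so even a coarsely
    sampled data set yields a smooth estimate rather than a
    nearest-point jump.
    """
    w = np.exp(-0.5 * ((x_query - x_data) / bandwidth) ** 2)
    return np.sum(w * y_data) / np.sum(w)

# Hypothetical 1D material data: coarse strain-stress samples
strain = np.linspace(0.0, 1.0, 8)
stress = 2.0 * strain + 0.05 * np.sin(20 * strain)

est = kernel_regression(0.45, strain, stress, bandwidth=0.2)
```

With a coarse grid of only 8 points, the estimate at strain 0.45 still lands close to the underlying trend, which is the behavior the paper reports for coarsely distributed data.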
The exponential growth of data necessitates an effective data storage scheme, which helps to manage the large quantity of data effectively. To accomplish this, the Deoxyribonucleic Acid (DNA) digital data storage process can be employed, which encodes and decodes binary data to and from synthesized strands of DNA. Vector quantization (VQ) is a commonly employed scheme for image compression, and optimal codebook generation is an effective process for reaching maximum compression efficiency. This article introduces a new DNA Computing with Water Strider Algorithm based Vector Quantization (DNAC-WSAVQ) technique for data storage systems. The proposed DNAC-WSAVQ technique encodes data using DNA computing and then compresses it for effective data storage. The DNAC-WSAVQ model initially performs DNA encoding on the input images to generate a binary encoded form. In addition, a Water Strider Algorithm with Linde-Buzo-Gray (WSA-LBG) model is applied for the compression process, so that the storage area can be considerably minimized; the WSA is applied to LBG to generate its optimal codebook. The performance of the DNAC-WSAVQ model is validated and the results are inspected under several measures. The comparative study highlights the improved outcomes of the DNAC-WSAVQ model over existing methods.
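The Linde-Buzo-Gray codebook generation that the WSA optimizes can be sketched in its classical splitting form (the swarm optimization itself is omitted; the toy image blocks and parameters below are hypothetical):

```python
import numpy as np

def lbg_codebook(vectors, size, eps=1e-3, max_iter=50):
    """Linde-Buzo-Gray codebook generation by iterative splitting.

    Starts from the global centroid, doubles the codebook by
    perturbing each codeword, then refines with k-means-style
    updates until the distortion improvement falls below eps.
    """
    codebook = vectors.mean(axis=0, keepdims=True)
    while len(codebook) < size:
        # split: perturb each codeword into two
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        prev = np.inf
        for _ in range(max_iter):
            # assign each vector to its nearest codeword
            d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            idx = d.argmin(axis=1)
            distortion = d[np.arange(len(vectors)), idx].mean()
            if prev - distortion < eps:
                break
            prev = distortion
            for k in range(len(codebook)):
                members = vectors[idx == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

blocks = np.random.default_rng(0).random((256, 4))  # toy 2x2 image blocks
cb = lbg_codebook(blocks, size=8)
```

A metaheuristic such as the WSA replaces or seeds the refinement step so the codebook escapes the local minima that plain LBG converges to.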
Until recently, computational power was insufficient to diagonalize atmospheric datasets of order 10^8 - 10^9 elements. Eigenanalysis of tens of thousands of variables can now achieve massive data compression for spatial fields with strong correlation properties. Application of eigenanalysis to 26,394 variable dimensions, for three severe weather datasets (tornado, hail and wind), retains 9 - 11 principal components explaining 42% - 52% of the variability. Rotated principal components (RPCs) detect localized coherent data variance structures for each outbreak type and are related to standardized anomalies of the meteorological fields. Our analyses of the RPC loadings and scores show that these graphical displays can efficiently reduce and interpret large datasets. Data are analyzed 24 hours prior to severe weather as a forecasting aid. RPC loadings of sea-level pressure fields show different morphology loadings for each outbreak type. Analysis of low-level moisture and temperature RPCs suggests that the moisture fields for hail and wind are more closely related than those for tornado outbreaks. Consequently, these patterns can identify precursors of severe weather and discriminate between tornadic and non-tornadic outbreaks.
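The eigenanalysis-based compression can be sketched on a toy correlated field; the dimensions and noise level below are illustrative stand-ins, not the 26,394-dimension severe weather data:

```python
import numpy as np

def pca_compress(X, n_components):
    """Compress rows of X by projecting onto the top principal
    components: eigen-decompose the covariance matrix, keep the
    n_components eigenvectors with the largest eigenvalues, and
    report the fraction of total variance they explain."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)            # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_components]
    W = vecs[:, order]
    explained = vals[order].sum() / vals.sum()
    return Xc @ W, W, explained

rng = np.random.default_rng(1)
# toy spatial field: 200 samples over 20 grid points, rank-3 signal
base = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 20))
X = base + 0.1 * rng.normal(size=(200, 20))
scores, W, frac = pca_compress(X, n_components=3)
```

Because the field is strongly correlated, a handful of components captures nearly all of the variance, which is exactly the compression property the abstract exploits.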
The cloud computing platform can efficiently allocate dynamic resources, generate dynamic computing and storage according to user requests, and provide a good platform for big data feature analysis and mining. Big data feature mining in the cloud computing environment is an effective method for the efficient application of massive data in the information age. In the process of big data mining, however, the method of big data feature mining based on gradient sampling has poor logicality: it only mines big data features from a single-level perspective, which reduces the precision of big data feature mining.
To achieve a high compression ratio as well as a high-quality reconstructed image, an effective image compression scheme named irregular segmentation region coding based on spiking cortical model (ISRCS) is presented. This scheme is region-based and mainly focuses on two issues. First, an appropriate segmentation algorithm is developed to partition an image into irregular regions and tidy contours, where the crucial regions corresponding to objects are retained and many tiny parts are eliminated; the irregular regions and the contours are then coded by different methods. The other issue is the coding method for the contours, where an efficient and novel chain code is employed. The scheme seeks a compromise between the quality of the reconstructed images and the compression ratio. Experiments show its higher performance compared with other compression technologies, in terms of higher quality of reconstructed images, higher compression ratio and less time consumed.
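The paper's chain code is its own novel design; as a generic stand-in, a classical 8-direction Freeman chain code shows the idea of storing a contour as a sequence of direction steps instead of full coordinates:

```python
# 8-connected Freeman chain code: direction index -> (dx, dy)
DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def chain_code(points):
    """Encode a contour (a list of adjacent pixel coordinates) as a
    sequence of 8-direction codes; each step then needs only 3 bits
    instead of a full coordinate pair."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        codes.append(DIRS.index((x1 - x0, y1 - y0)))
    return codes

contour = [(0, 0), (1, 0), (2, 1), (2, 2), (1, 2)]
print(chain_code(contour))  # [0, 1, 2, 4]
```

Only the start point and the code sequence need to be stored, which is why efficient contour coding matters so much for the overall compression ratio.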
The flow around the NACA0012 airfoil, enwrapped by a body-fitted grid, is simulated by a coupled double-distribution-function (DDF) lattice Boltzmann method (LBM) for the compressible Navier-Stokes equations. First, the method is tested by simulating the low Reynolds number flow at Ma=0.5, a=0.0, Re=5000. Then the simulation of the flow around the airfoil is carried out at Ma=0.5, 0.85, 1.2 and a=-0.05, 1.0, 0.0, respectively. A better result is obtained by using a locally refined grid, which reduces the error produced by the grid at Ma=0.85. Though an inviscid boundary condition is used to avoid the problem of flow transition to turbulence at high Reynolds numbers, the pressure distribution obtained by the simulation agrees well with the experimental results. This proves the reliability of the method and shows its potential for compressible flow simulation. The successful application to the flow around an airfoil lays a foundation for the numerical simulation of turbulence.
Quantitatively correcting the unconfined compressive strength for sample disturbance is an important research topic in the practice of ocean engineering and geotechnical engineering. In this study, specimens of undisturbed natural marine clay obtained from the same depth at the same site were deliberately disturbed to different levels. The specimens with different extents of sample disturbance were then trimmed for both oedometer tests and unconfined compression tests. The degree of sample disturbance SD is obtained from the oedometer test data. The relationship between the unconfined compressive strength q_u and SD is studied to investigate the effect of sample disturbance on q_u. It is found that the value of q_u decreases linearly with increasing SD. A simple method of correcting q_u for sample disturbance is then proposed, and its validity is verified through analysis of existing published data.
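The reported linear q_u-SD relationship suggests a correction of the form sketched below; the slope, the paired test values, and the unit are hypothetical illustrations, not the study's data:

```python
import numpy as np

def correct_qu(qu_measured, sd_measured, slope):
    """Correct the unconfined compressive strength for sample
    disturbance, assuming q_u decreases linearly with SD:
        q_u(SD) = q_u(0) + slope * SD,  with slope < 0,
    so the undisturbed strength is recovered by extrapolating
    the measurement back to SD = 0."""
    return qu_measured - slope * sd_measured

# hypothetical calibration: fit the slope from paired (SD, q_u) tests
sd = np.array([0.1, 0.2, 0.35, 0.5])
qu = np.array([48.0, 44.2, 38.1, 32.4])   # kPa, illustrative only
slope, intercept = np.polyfit(sd, qu, 1)

qu0 = correct_qu(qu_measured=38.1, sd_measured=0.35, slope=slope)
```

The corrected value qu0 lands near the fitted intercept, i.e. the strength the specimen would show with zero disturbance.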
The gravity gradient is the second derivative of the gravity potential and contains more high-frequency information of Earth's gravity field. Gravity gradient observation data require deducting the prior and intrinsic parts to obtain more variational information. A model generated from a topographic surface database is more appropriate for representing gradiometric effects derived from near-surface mass, as other kinds of data can hardly reach the required spatial resolution. The rectangular prism method, namely an analytic integration of Newtonian potential integrals, is a reliable and commonly used approach to modeling the gravity gradient, but its computing efficiency is extremely low. A modified rectangular prism method and a graphics processing unit (GPU) parallel algorithm are proposed to speed up the modeling process. The modified method avoids massive redundant computations by deforming the formulas according to the symmetries of the prisms' integral regions, and the proposed algorithm parallelizes this method's computing process. The parallel algorithm was compared with a conventional serial algorithm using 100 elevation data in two topographic areas (rough and moderate terrain). Modeling differences between the two algorithms were less than 0.1 E, which is attributed to the precision difference between single-precision and double-precision floating-point numbers. The parallel algorithm was approximately 200 times more computationally efficient than the serial algorithm in the experiments, demonstrating its effective speedup of the modeling process. Further analysis indicates that both the modified method and the computational parallelism through the GPU contributed to the proposed algorithm's performance.
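As a rough illustration of what the prism integrals compute (and why they are worth accelerating), a brute-force numerical version sums point-mass cells; the prism geometry, density, and observation point below are assumptions, and real modeling uses the analytic closed form instead:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def prism_gz(obs, bounds, rho, n=20):
    """Brute-force vertical attraction g_z of a homogeneous
    rectangular prism, summing n**3 point-mass cells.  Accuracy grows
    with n while cost grows as n**3, which is why the analytic
    rectangle-prism formulas plus GPU parallelism matter in practice."""
    (x0, x1), (y0, y1), (z0, z1) = bounds
    xs = np.linspace(x0, x1, n + 1)[:-1] + (x1 - x0) / (2 * n)
    ys = np.linspace(y0, y1, n + 1)[:-1] + (y1 - y0) / (2 * n)
    zs = np.linspace(z0, z1, n + 1)[:-1] + (z1 - z0) / (2 * n)
    dv = (x1 - x0) * (y1 - y0) * (z1 - z0) / n**3   # cell volume
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    dx, dy, dz = X - obs[0], Y - obs[1], Z - obs[2]
    r3 = (dx**2 + dy**2 + dz**2) ** 1.5
    return G * rho * np.sum(dz / r3) * dv

# 100 m cube of crustal rock observed from 100 m away along z
gz = prism_gz(obs=(0.0, 0.0, -100.0),
              bounds=((-50, 50), (-50, 50), (0, 100)),
              rho=2670.0)
```

Differentiating such attractions once more (numerically or analytically) yields the gradient tensor components the paper models.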
To achieve zero-defect production during computer numerical control (CNC) machining processes, it is imperative to develop effective diagnosis systems that detect anomalies efficiently. However, due to the dynamic conditions of the machine and tooling during machining, the diagnosis systems currently adopted in industry are inadequate. To address this issue, this paper presents a novel data-driven diagnosis system for anomalies. In this system, power data for condition monitoring are continuously collected during dynamic machining processes to support online diagnosis analysis. To facilitate the analysis, preprocessing mechanisms have been designed to de-noise, normalize, and align the monitored data. Important features are extracted from the monitored data, and thresholds are defined to identify anomalies. Because the conditions of the machine and tooling vary during machining, the thresholds used to identify anomalies can vary as well. Based on historical data, the threshold values are optimized using a fruit fly optimization (FFO) algorithm to achieve more accurate detection. Practical case studies were used to validate the system, demonstrating its potential and effectiveness for industrial applications.
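The threshold-based detection can be sketched with a plain grid search standing in for the fruit fly optimization (both simply search the threshold space against historical labels); the power values and labels below are synthetic:

```python
import numpy as np

def best_threshold(power, labels, candidates):
    """Pick the power threshold that best separates labeled anomalies
    on historical data, scoring each candidate by F1."""
    def f1(th):
        pred = power > th
        tp = np.sum(pred & labels)
        fp = np.sum(pred & ~labels)
        fn = np.sum(~pred & labels)
        return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    scores = [f1(t) for t in candidates]
    return candidates[int(np.argmax(scores))]

rng = np.random.default_rng(2)
normal = rng.normal(1.0, 0.1, 200)    # kW, nominal cutting power
anomaly = rng.normal(1.8, 0.2, 20)    # hypothetical tool-wear spikes
power = np.concatenate([normal, anomaly])
labels = np.concatenate([np.zeros(200, bool), np.ones(20, bool)])

th = best_threshold(power, labels, np.linspace(0.8, 2.0, 61))
```

An FFO-style optimizer replaces the exhaustive scan when several coupled thresholds must be tuned at once and a grid becomes too expensive.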
A scheme for general-purpose FDTD visual scientific computing software designed with the object-oriented design (OOD) method is introduced in this paper. By abstracting the parameters of the FDTD grids into an individual class and separating them from the iteration procedure, the visual software can be adapted to more comprehensive computing problems. Real-time gray-scale graphics and wave curves of the results can be rendered using the DirectX technique. The special difference equations and data structures in dispersive media are considered, and the peculiarities of the parameters in the perfectly matched layer are also discussed.
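The iteration procedure that the grid class is separated from can be illustrated by a minimal 1D FDTD loop; the actual software is far more general (3D, dispersive media, PML), and the grid size, source, and coefficients here are illustrative:

```python
import numpy as np

def fdtd_1d(steps=200, n=200):
    """Minimal 1D FDTD update loop in free space (normalized units):
    the E and H fields leapfrog in space and time, and a hard
    sinusoidal source is injected at the grid center."""
    ez = np.zeros(n)
    hy = np.zeros(n)
    for t in range(steps):
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])        # update H from curl E
        ez[1:] += 0.5 * (hy[1:] - hy[:-1])         # update E from curl H
        ez[n // 2] = np.sin(2 * np.pi * t / 30.0)  # hard source
    return ez

field = fdtd_1d()
```

Keeping the grid arrays and material parameters in their own class, as the paper proposes, lets this same loop drive different problems without rewriting the iteration code.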
This work presents a new application of the Hierarchical Function Expansion Method to the solution of the Navier-Stokes equations for compressible fluids in two dimensions at high velocity. The method is based on the finite element method with the Petrov-Galerkin formulation known as SUPG (Streamline Upwind Petrov-Galerkin), applied with the expansion of the variables into hierarchical functions. To test and validate the proposed numerical method, as well as the computational program developed, simulations are performed for cases whose theoretical solutions are known: a continuity test, a stability and convergence test, a temperature step problem, and several oblique shocks. The purpose of the last cases is essentially to verify that the developed method captures the shock wave. The results obtained with the proposed method agree well with the theoretical solutions, both qualitatively and quantitatively, allowing the conclusion that the objectives of this work have been reached.
There is data redundancy in both traditional databases and existing temporal databases, and the volume of temporal data is increasing rapidly. We put forward a compressed storage strategy for temporal data that combines existing compression technology to resolve the redundancy arising during temporal data storage; temporal data in the slowly changing domain and the momentarily changing domain are accessed by an independent clock method and a mutual clock method, respectively. We also propose a grid storage strategy to cope with the rapid growth of temporal data.
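The compressed storage of a slowly changing domain can be illustrated with simple run-length encoding; this is a generic stand-in for the redundancy-removal idea, not the paper's specific tactic:

```python
def rle_temporal(values):
    """Run-length encode a slowly changing temporal attribute:
    store (value, run_length) pairs instead of one row per
    timestamp, removing the redundancy of repeated values."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1      # extend the current run
        else:
            runs.append([v, 1])   # start a new run
    return runs

history = ["A", "A", "A", "B", "B", "A"]
print(rle_temporal(history))  # [['A', 3], ['B', 2], ['A', 1]]
```

A momentarily changing domain gains little from such runs, which is why the paper accesses the two domains with different clock methods.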
The division operation is relatively infrequent in traditional applications, but it is increasingly indispensable and important in many modern applications. In this paper, the implementation of modified signed-digit (MSD) floating-point division using the Newton-Raphson method on a ternary optical computer (TOC) is studied. Since MSD floating-point addition is carry-free and the digit width of the TOC system is large, it is easy to handle sufficiently wide data and to transform the division operation into multiplication and addition operations. Using data scanning and truncation, the problem of digit expansion is effectively solved within the error limit. The division yields good results and high efficiency. A worked instance of MSD floating-point division shows that the method is feasible.
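The Newton-Raphson reduction of division to multiplication and addition can be sketched as follows; the scaling and initial estimate are the textbook choices, not necessarily those of the TOC implementation:

```python
def nr_divide(a, d, iterations=6):
    """Compute a/d via Newton-Raphson: iterate x <- x * (2 - d*x),
    which converges quadratically to 1/d, then multiply by a.
    Only multiplications and additions are needed, matching the
    carry-free operations the optical system provides."""
    assert d > 0
    # scale d into [0.5, 1) so the standard initial guess converges
    shift = 0
    while d >= 1.0:
        d /= 2.0
        shift += 1
    while d < 0.5:
        d *= 2.0
        shift -= 1
    x = 48.0 / 17.0 - 32.0 / 17.0 * d   # classic initial estimate
    for _ in range(iterations):
        x = x * (2.0 - d * x)
    return a * x / (2.0 ** shift)

print(nr_divide(355.0, 113.0))  # ~3.14159
```

Quadratic convergence roughly doubles the number of correct digits per iteration, so a handful of wide MSD multiply-adds suffices for full precision.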
Under the pressure of the energy dilemma, the energy Internet has become one of the most important technologies in international academic and industrial areas. However, massive small data from users, which are too scattered to be suitable for compression, can easily exhaust computational resources and lower the random access possibility, thereby reducing system performance. Moreover, electric substations are sensitive to the transmission latency of user data, such as control information, and the traditional energy Internet usually cannot meet these requirements. Integrating mobile-edge computing makes the energy Internet convenient for data acquisition, processing, management, and access. In this paper, we propose a novel framework for the energy Internet to improve the random access possibility and reduce transmission latency. This framework utilizes the local area network to collect data from users and makes data compression feasible for the energy Internet. Simulation results show that this architecture can enhance the random access possibility by a large margin and reduce transmission latency without extra energy consumption overhead.
Funding (data-driven elasticity study): supported by JSPS KAKENHI (Grants 17K06633 and 18K18898).
Funding (DNAC-WSAVQ study): supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2021R1A6A1A03039493); in part by the NRF grant funded by the Korea government (MSIT) (NRF-2022R1A2C1004401); and in part by the 2022 Yeungnam University Research Grant.
Funding (ISRCS image compression study): supported by the National Science Foundation of China (60872109) and the Program for New Century Excellent Talents in University (NCET-06-0900).
Funding (LBM airfoil study): supported by the Aeronautical Science Foundation of China (20061453020) and the Foundation for Basic Research of Northwestern Polytechnical University (03).
Funding (CNC diagnosis study): funding from the EU Smarter project (PEOPLE-2013-IAPP-610675).
Funding (FDTD software study): supported by the National Natural Science Foundation (No. 69831020).
Funding (MSD division study): supported by the Shanghai Leading Academic Discipline Project (Grant No. J50103) and the National Natural Science Foundation of China (Grant No. 61073049).
Funding (energy Internet study): supported by the Beijing Municipal Science and Technology Commission Research (No. Z171100005217001); the National Science and Technology Major Project (No. 2018ZX03001016); the Fundamental Research Funds for the Central Universities (No. 2018RC06); the National Key R&D Program of China (No. 2017YFC0112802); the Beijing Laboratory of Advanced Information Networks; the Beijing Key Laboratory of Network System Architecture and Convergence; and the 111 Project (B17007).