Journal Articles
22 articles found
1. Design of Latency-Aware IoT Modules in Heterogeneous Fog-Cloud Computing Networks (Cited: 2)
Authors: Syed Rizwan Hassan, Ishtiaq Ahmad, Jamel Nebhen, Ateeq Ur Rehman, Muhammad Shafiq, Jin-Ghoo Choi. Computers, Materials & Continua, SCIE EI, 2022, No. 3, pp. 6057-6072 (16 pages)
The modern paradigm of the Internet of Things (IoT) has led to a significant increase in demand for latency-sensitive applications in Fog-based cloud computing. However, such applications cannot meet strict quality of service (QoS) requirements. The large-scale deployment of IoT requires more effective use of network infrastructure to ensure QoS when processing big data. Generally, cloud-centric IoT application deployment involves different modules running on terminal devices and cloud servers. Fog devices with different computing capabilities must process the data generated by the end device, so deploying latency-sensitive applications in a heterogeneous fog computing environment is a difficult task. In addition, when there is an inconsistent connection delay between the fog and the terminal device, the deployment of such applications becomes more complicated. In this article, we propose an algorithm that can effectively place application modules on network nodes while considering connection delay, processing power, and sensing data volume. We conducted simulations in iFogSim to compare the algorithm against traditional cloud computing deployment. The simulation results verify the effectiveness of the proposed algorithm in terms of end-to-end delay and network consumption; therein, latency and execution time are insensitive to the number of sensors.
Keywords: IoT; fog-cloud computing architecture; module placement; latency-sensitive application; resource-aware placement
2. Compute Unified Device Architecture Implementation of Euler/Navier-Stokes Solver on Graphics Processing Unit Desktop Platform for 2-D Compressible Flows
Authors: Zhang Jiale, Chen Hongquan. Transactions of Nanjing University of Aeronautics and Astronautics, EI CSCD, 2016, No. 5, pp. 536-545 (10 pages)
A personal desktop platform with the teraflops peak performance of thousands of cores is realized at the price of a conventional workstation using programmable graphics processing units (GPUs). A GPU-based parallel Euler/Navier-Stokes solver is developed for 2-D compressible flows using NVIDIA's Compute Unified Device Architecture (CUDA) programming model in the CUDA Fortran programming language. The techniques of implementing CUDA kernels, a double-layered thread hierarchy, and a varied memory hierarchy are presented to form the GPU-based algorithm for the Euler/Navier-Stokes equations. The resulting parallel solver is validated by a set of typical test flow cases. The numerical results show that a speedup of dozens of times relative to a serial CPU implementation can be achieved using a single GPU desktop platform, which demonstrates that a GPU desktop can serve as a cost-effective parallel computing platform to accelerate computational fluid dynamics (CFD) simulations substantially.
Keywords: graphics processing unit (GPU); GPU parallel computing; compute unified device architecture (CUDA) Fortran; finite volume method (FVM); acceleration
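As an illustration of the kernel and thread-hierarchy techniques this abstract describes, a cell-wise residual accumulation in a finite volume solver might be organized as below. This is a minimal CUDA C++ sketch (the paper itself uses CUDA Fortran); the array names and the fixed faces-per-cell layout are illustrative assumptions, not the authors' code.

```cuda
#include <cuda_runtime.h>

// One thread per finite-volume cell: gather the fluxes of the cell's
// faces into a residual. blockIdx/threadIdx form the double-layered
// (grid-of-blocks, block-of-threads) hierarchy the abstract mentions.
__global__ void cellResidual(const double* faceFlux, const int* cellFaces,
                             double* residual, int nCells, int facesPerCell)
{
    int c = blockIdx.x * blockDim.x + threadIdx.x;   // global cell index
    if (c >= nCells) return;
    double r = 0.0;
    for (int f = 0; f < facesPerCell; ++f)
        r += faceFlux[cellFaces[c * facesPerCell + f]];
    residual[c] = r;
}
// Typical launch: cellResidual<<<(nCells + 255) / 256, 256>>>(...);
```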
3. High-precision parallel computing model of solute transport based on GPU acceleration
Authors: Shang-hong Zhang, Rong-qi Zhang, Wen-da Li, Xi-yan Yang, Yang Zhou. Journal of Hydrodynamics, SCIE EI CSCD, 2024, No. 1, pp. 202-212 (11 pages)
The scenario simulation analysis of water environmental emergencies is very important for risk prevention and control and for emergency response. To quickly and accurately simulate the transport and diffusion of high-intensity pollutants during sudden water pollution events, this study proposes a high-precision pollution transport and diffusion model for unstructured grids based on the Compute Unified Device Architecture (CUDA). The finite volume method with a total variation diminishing (TVD) limiter using the r-factor proposed by Kong is used to reduce numerical diffusion and oscillation errors when simulating pollutants with sharp concentration gradients, and graphics processing unit acceleration is used to improve computational efficiency. The advection-diffusion process of the model is verified numerically using two benchmark cases, and the efficiency of the model is evaluated using an engineering example. The results demonstrate that the model performs well in simulating material transport in the presence of sharp concentration gradients and has high computational efficiency: the acceleration ratio is 46 times the single-thread performance of the original model. The efficiency of the accelerated model meets the requirements of engineering applications, enabling rapid early warning and assessment of water pollution accidents.
Keywords: pollution transport and diffusion model; parallel computing; Compute Unified Device Architecture (CUDA); pollution event
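The TVD-limited finite volume update is the numerical core here. The sketch below shows a one-dimensional, constant-velocity analogue with a minmod limiter in CUDA C++; the paper's unstructured-grid scheme with the Kong r-factor limiter is more involved, so treat the names and the limiter choice as assumptions for illustration.

```cuda
#include <cuda_runtime.h>
#include <math.h>

__device__ double minmod(double a, double b) {
    return (a * b <= 0.0) ? 0.0 : (fabs(a) < fabs(b) ? a : b);
}

// One thread per cell: MUSCL-type upwind update (u > 0) with limited
// slopes, which suppresses oscillations at sharp concentration fronts.
__global__ void advectTVD(const double* c, double* cNew, int n,
                          double u, double dtdx) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < 2 || i >= n - 1) return;                 // interior cells only
    double sL = minmod(c[i-1] - c[i-2], c[i]   - c[i-1]);
    double sR = minmod(c[i]   - c[i-1], c[i+1] - c[i]);
    double fL = u * (c[i-1] + 0.5 * (1.0 - u * dtdx) * sL); // face i-1/2
    double fR = u * (c[i]   + 0.5 * (1.0 - u * dtdx) * sR); // face i+1/2
    cNew[i] = c[i] - dtdx * (fR - fL);
}
```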
4. SOLVERS FOR SYSTEMS OF LARGE SPARSE LINEAR AND NONLINEAR EQUATIONS BASED ON MULTI-GPUS (Cited: 3)
Authors: 刘沙, 钟诚文, 陈效鹏. Transactions of Nanjing University of Aeronautics and Astronautics, EI, 2011, No. 3, pp. 300-308 (9 pages)
Numerical treatment of engineering application problems often eventually results in a solution of systems of linear or nonlinear equations. The solution process on digital computing devices usually takes tremendous time due to the extremely large sizes encountered in most real-world engineering applications. Therefore, practical solvers for systems of linear and nonlinear equations based on multiple graphics processing units (GPUs) are proposed in order to accelerate the solving process. In the linear and nonlinear solvers, the preconditioned bi-conjugate gradient stable (PBi-CGstab) method and the inexact Newton method are used to achieve fast and stable convergence behavior. Multiple GPUs are utilized to obtain the additional data storage that large problems need.
Keywords: general purpose graphics processing unit (GPGPU); compute unified device architecture (CUDA); system of linear equations; system of nonlinear equations; inexact Newton method; bi-conjugate gradient stable (Bi-CGstab) method
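Each PBi-CGstab iteration is dominated by sparse matrix-vector products, which is where the GPU pays off. A minimal scalar CSR SpMV kernel in CUDA C++ might look as follows; the storage format and names are illustrative assumptions, and a real multi-GPU solver would partition the rows across devices with halo exchanges.

```cuda
#include <cuda_runtime.h>

// y = A * x for a matrix in compressed sparse row (CSR) format,
// one thread per row.
__global__ void spmvCsr(const int* rowPtr, const int* colIdx,
                        const double* val, const double* x,
                        double* y, int nRows) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= nRows) return;
    double sum = 0.0;
    for (int j = rowPtr[row]; j < rowPtr[row + 1]; ++j)
        sum += val[j] * x[colIdx[j]];
    y[row] = sum;
}
```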
5. A GPU-Accelerated Discontinuous Galerkin Method for Solving Two-Dimensional Laminar Flows (Cited: 2)
Authors: GAO Huanqin, CHEN Hongquan, ZHANG Jiale, XU Shengguan, GAO Yukun. Transactions of Nanjing University of Aeronautics and Astronautics, EI CSCD, 2022, No. 4, pp. 450-466 (17 pages)
A graphics processing unit (GPU)-accelerated discontinuous Galerkin (DG) method is presented for solving two-dimensional laminar flows. The DG method is ported from the central processing unit to the GPU, achieving speedup through programming under the compute unified device architecture (CUDA) model. The CUDA kernel subroutines are designed to meet the requirements of the high-order computations of the DG method. The corresponding data structures are constructed in a component-wise manner, and the thread hierarchy is manipulated in a cell-wise or edge-wise manner for the integrals involved in solving the laminar Navier-Stokes equations, in which the inviscid and viscous flux terms are computed by the local Lax-Friedrichs scheme and the second scheme of Bassi & Rebay, respectively. A strong stability preserving Runge-Kutta scheme is then used for time marching of the numerical solutions. The resulting GPU-accelerated DG method is first validated on traditional Couette flow problems with different mesh sizes and different orders of approximation, showing that the expected orders of convergence are achieved. Numerical simulations of the typical flows over a circular cylinder or a NACA 0012 airfoil are then carried out, and the results are compared with analytical solutions or with experimental and numerical values reported in the literature, together with a performance analysis of the developed code in terms of GPU speedups. The computing times of the presented test cases are significantly reduced without losing accuracy, with impressive speedups of up to 69.7 times over the CPU counterpart.
Keywords: discontinuous Galerkin; GPU; compute unified device architecture (CUDA); Navier-Stokes equations; laminar flows
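The local Lax-Friedrichs flux used for the inviscid terms has a simple closed form, F = 0.5 (f(uL) + f(uR)) - 0.5 a (uR - uL), with a the local wave speed. Below is a scalar, edge-wise CUDA C++ sketch matching the edge-wise thread mapping the abstract describes; Burgers' flux stands in for the Navier-Stokes flux, and all names are illustrative assumptions.

```cuda
#include <cuda_runtime.h>
#include <math.h>

__device__ double f(double u) { return 0.5 * u * u; }  // Burgers' flux

// One thread per edge: uL/uR are the trace values from the two
// adjacent cells, and the result feeds the DG residual assembly.
__global__ void llfFlux(const double* uL, const double* uR,
                        double* flux, int nEdges) {
    int e = blockIdx.x * blockDim.x + threadIdx.x;
    if (e >= nEdges) return;
    double a = fmax(fabs(uL[e]), fabs(uR[e]));   // local wave speed
    flux[e] = 0.5 * (f(uL[e]) + f(uR[e])) - 0.5 * a * (uR[e] - uL[e]);
}
```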
6. A two-stage CO-PSO minimum structure inversion using CUDA for extracting IP information from MT data (Cited: 1)
Authors: 董莉, 李帝铨, 江沸菠. Journal of Central South University, SCIE EI CAS CSCD, 2018, No. 5, pp. 1195-1212 (18 pages)
The study of induced polarization (IP) information extraction from magnetotelluric (MT) sounding data is of great practical significance to the exploitation of deep mineral, oil, and gas resources. The linear inversion method, which has been given priority in previous research on IP information extraction, has three main problems: 1) dependency on the initial model, 2) easily falling into local minima, and 3) serious non-uniqueness of solutions. Taking the nonlinearity and nonconvexity of IP information extraction into consideration, a two-stage CO-PSO minimum structure inversion method using the Compute Unified Device Architecture (CUDA) is proposed. On one hand, a novel Cauchy oscillation particle swarm optimization (CO-PSO) algorithm is applied to extract nonlinear IP information from MT sounding data, implemented as a parallel algorithm within the CUDA computing architecture; on the other hand, the impact of the polarizability on the observation data is strengthened by introducing a second-stage inversion process, and a regularization parameter is applied in the fitness function of the PSO algorithm to address the multi-solution problem of the inversion. The inversion simulation results of polarization layers in different strata of various geoelectric models show that smooth models of resistivity and IP parameters can be obtained by the proposed algorithm, and the results are relatively stable and accurate. Experiments with added noise indicate that the method is robust to Gaussian white noise. Compared with the traditional PSO and GA algorithms, the proposed algorithm is more efficient and produces better inversion results.
Keywords: Cauchy oscillation particle swarm optimization; magnetotelluric sounding; nonlinear inversion; induced polarization (IP) information extraction; Compute Unified Device Architecture (CUDA)
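PSO parallelizes naturally as one thread per particle coordinate. The CUDA C++ sketch below shows a velocity/position update with a Cauchy perturbation, an assumption-laden illustration of the "Cauchy oscillation" idea (a uniform variate u maps to a standard Cauchy variate via tan(pi (u - 1/2))); it is not the authors' exact update rule.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// One thread per (particle, dimension) pair. u holds pre-generated
// uniform(0,1) randoms, three per pair; gamma scales the Cauchy jump.
__global__ void psoUpdate(double* x, double* v, const double* pBest,
                          const double* gBest, const double* u,
                          int nParticles, int dim, double w,
                          double c1, double c2, double gamma) {
    const double PI = 3.14159265358979323846;
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nParticles * dim) return;
    int d = i % dim;                               // dimension index
    double r1 = u[3 * i], r2 = u[3 * i + 1];
    v[i] = w * v[i] + c1 * r1 * (pBest[i] - x[i])
                    + c2 * r2 * (gBest[d] - x[i]);
    double cauchy = tan(PI * (u[3 * i + 2] - 0.5)); // Cauchy oscillation
    x[i] += v[i] + gamma * cauchy;
}
```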
7. GPU based numerical simulation of core shooting process
Authors: Yi-zhong Zhang, Gao-chun Lu, Chang-jiang Ni, Tao Jing, Lin-long Yang, Qin-fang Wu. China Foundry, SCIE, 2017, No. 5, pp. 392-397 (6 pages)
The core shooting process is the most widely used technique for making sand cores, and it plays an important role in their quality. Although numerical simulation can hopefully optimize the core shooting process, research on its numerical simulation is very limited. Based on a two-fluid model (TFM) and a kinetic-friction constitutive correlation, a program for 3D numerical simulation of the core shooting process has been developed, achieving good agreement with in-situ experiments. To match the needs of engineering applications, a graphics processing unit (GPU) has also been used to improve the calculation efficiency. The parallel algorithm based on the Compute Unified Device Architecture (CUDA) platform can significantly decrease computing time through the multi-threaded GPU. In this work, the program accelerated by the CUDA parallelization method was developed, and the accuracy of the calculations was ensured by comparison with in-situ experimental results photographed by a high-speed camera. The design and optimization of the parallel algorithm are discussed. The simulation of a sand core test-piece demonstrated the improvement in calculation efficiency by the GPU. The developed program has also been validated by in-situ experiments with a transparent core-box, a high-speed camera, and a pressure measuring system. The computing time of the parallel program was reduced by nearly 95% while the simulation results remained quite consistent with experimental data. The GPU parallelization method successfully solves the problem of the low computational efficiency of the 3D sand shooting simulation program, making the developed GPU program appropriate for engineering applications.
Keywords: graphics processing unit (GPU); Compute Unified Device Architecture (CUDA); parallelization; core shooting process
8. Hybrid domain multipactor prediction algorithm and its CUDA parallel implementation
Authors: WU Peiyu, XIE Yongjun, NIU Liqiang, JIANG Haolin. Journal of Systems Engineering and Electronics, SCIE EI CSCD, 2020, No. 6, pp. 1097-1104 (8 pages)
Based on the finite element method (FEM) in the frequency domain and the particle-in-cell approach in the time domain, a hybrid domain multipactor threshold prediction algorithm is proposed in this paper. The proposed algorithm combines the advantages of the frequency domain and time domain algorithms, offering both high computational accuracy and considerable computational efficiency. In addition, the compute unified device architecture (CUDA) acceleration technique can be employed to further enhance its simulation efficiency. Numerical examples are carried out to demonstrate the effectiveness of the proposed algorithm. The results indicate that the multipactor threshold can be accurately predicted and the computational efficiency can be improved.
Keywords: compute unified device architecture (CUDA); finite element method (FEM); hybrid domain; multipactor threshold prediction; particle-in-cell (PIC)
9. Multi-relaxation-time lattice Boltzmann simulations of lid driven flows using graphics processing unit
Authors: Chenggong LI, J.P.Y. MAA. Applied Mathematics and Mechanics (English Edition), SCIE EI CSCD, 2017, No. 5, pp. 707-722 (16 pages)
Large eddy simulation (LES) using the Smagorinsky eddy viscosity model is added to the two-dimensional nine-velocity-component (D2Q9) lattice Boltzmann equation (LBE) with multi-relaxation-time (MRT) to simulate incompressible turbulent cavity flows with Reynolds numbers up to 1 × 10^7. To improve the computational efficiency of LBM for the numerical simulation of turbulent flows, the massively parallel computing power of a graphics processing unit (GPU) with the compute unified device architecture (CUDA) is introduced into the MRT-LBE-LES model. The model performs well compared with results from others, with a 76-fold increase in computational efficiency. It appears that the higher the Reynolds number is, the smaller the Smagorinsky constant should be, if the lattice number is fixed. Also, for a selected high Reynolds number and a properly selected Smagorinsky constant, there is a minimum requirement for the lattice number so that the Smagorinsky eddy viscosity will not be excessively large.
Keywords: large eddy simulation (LES); multi-relaxation-time (MRT); lattice Boltzmann equation (LBE); two-dimensional nine velocity components (D2Q9); Smagorinsky model; graphics processing unit (GPU); compute unified device architecture (CUDA)
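A lattice Boltzmann time step maps onto one GPU thread per lattice node, which is what makes LBM so amenable to CUDA. The sketch below shows a D2Q9 collision with a single-relaxation-time (BGK) operator, a deliberate simplification of the paper's MRT collision; the structure-of-arrays layout fq[q * nNodes + n] and the name omega are assumptions.

```cuda
#include <cuda_runtime.h>

__constant__ double wq[9] = { 4.0/9.0, 1.0/9.0, 1.0/9.0, 1.0/9.0,
                              1.0/9.0, 1.0/36.0, 1.0/36.0, 1.0/36.0,
                              1.0/36.0 };
__constant__ int ex[9] = { 0, 1, 0, -1,  0, 1, -1, -1,  1 };
__constant__ int ey[9] = { 0, 0, 1,  0, -1, 1,  1, -1, -1 };

// One thread per lattice node; the structure-of-arrays layout lets
// neighbouring threads read contiguous (coalesced) memory.
__global__ void bgkCollide(double* fq, int nNodes, double omega) {
    int n = blockIdx.x * blockDim.x + threadIdx.x;
    if (n >= nNodes) return;
    double rho = 0.0, ux = 0.0, uy = 0.0;
    for (int q = 0; q < 9; ++q) {                // macroscopic moments
        double fi = fq[q * nNodes + n];
        rho += fi; ux += fi * ex[q]; uy += fi * ey[q];
    }
    ux /= rho; uy /= rho;
    for (int q = 0; q < 9; ++q) {                // relax to equilibrium
        double eu = ex[q] * ux + ey[q] * uy;
        double feq = wq[q] * rho * (1.0 + 3.0 * eu + 4.5 * eu * eu
                                    - 1.5 * (ux * ux + uy * uy));
        fq[q * nNodes + n] += omega * (feq - fq[q * nNodes + n]);
    }
}
```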
10. Graphic Processing Unit-Accelerated Neural Network Model for Biological Species Recognition
Authors: 温程璐, 潘伟, 陈晓熹, 祝青园. Journal of Donghua University (English Edition), EI CAS, 2012, No. 1, pp. 5-8 (4 pages)
A graphics processing unit (GPU)-accelerated biological species recognition method using a partially connected neural evolutionary network model is introduced in this paper. The partially connected neural evolutionary network adopted in the paper can overcome the disadvantage of traditional neural networks, which are limited to small inputs. The whole image is considered as the input of the neural network, so maximal features can be kept for recognition. To speed up the recognition process of the neural network, a fast implementation of the partially connected neural network was conducted on an NVIDIA Tesla C1060 using the NVIDIA compute unified device architecture (CUDA) framework. Image sets of eight biological species were obtained to test the GPU implementation and its counterpart serial CPU implementation. Experimental results showed that the GPU implementation works effectively in terms of both recognition rate and speed, gaining a 343× speedup over its counterpart CPU implementation. Compared to a feature-based recognition method on the same recognition task, the method also achieved an acceptable correct rate of 84.6% when testing on the eight biological species.
Keywords: graphics processing unit (GPU); compute unified device architecture (CUDA); neural network; species recognition
11. Machine Recognition of Plan Typologies: Shotgun and Foursquare
Authors: Amanda Green, Frank Jacobus, Jay McCormack, Josh Hartung. Computer Technology and Application, 2012, No. 1, pp. 24-31 (8 pages)
The evolution of expert and knowledge-based systems in architecture requires the gradual population of building-specific databases. Often these databases are slow to evolve due to the time-consuming nature of effectively categorizing building features in a meaningful way that allows for retrieval and reuse. New advances in artificial intelligence such as Hierarchical Temporal Memory (HTM) have the potential to make the construction of these databases more realistic in the near future. Based on an emerging theory of human neurological function, HTMs excel at ambiguous pattern recognition. This paper includes a first experiment using HTMs for learning and recognizing patterns in the form of two distinct American house plan typologies, and further tests the HTM's recognition tendencies on alternate house plan types. Results from the experiment indicate that HTMs develop a quality of storage similar to that of humans and are therefore a promising option for capturing multi-modal information in future design automation efforts.
Keywords: Hierarchical Temporal Memory (HTM); machine learning; artificial intelligence; architectural computation
12. An enhanced GPU reduction at the warp-level
Authors: Hou Neng, He Fazhi, Zhou Yi. Computer Aided Drafting, Design and Manufacturing, 2016, No. 2, pp. 43-52 (10 pages)
In recent years, graphics processing unit (GPU)-accelerated intelligent algorithms have been widely utilized for solving combinatorial optimization problems, which are NP-hard. These intelligent algorithms involve a common operation, namely reduction, in which the best suitable candidate solution in the neighborhood is selected. As one of the main procedures, it is necessary to optimize the reduction on the GPU. In this paper, we propose an enhanced warp-based reduction on the GPU. Compared with existing block-based reduction methods, our method efficiently exploits the potential of implementation at the warp level, which better matches the characteristics of current GPU architecture. Firstly, in order to improve global memory access performance, vectorized accessing is utilized. Secondly, at the level of thread block reduction, an enhanced warp-based reduction on shared memory is presented to form partial results. Thirdly, the number of thread blocks is determined from the maximum thread block size and the maximum number of threads per streaming multiprocessor on the GPU. Finally, the proposed method is evaluated on three generations of NVIDIA GPUs, achieving better performance than previous methods.
Keywords: reduction; graphics processing unit; compute unified device architecture; warp-level reduction
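A warp-level reduction keeps partial sums in registers and exchanges them with shuffle instructions, touching shared memory only once per warp instead of once per round. The CUDA C++ sketch below is the standard formulation of this pattern rather than the authors' exact code; it assumes blockDim.x is a multiple of 32.

```cuda
#include <cuda_runtime.h>

// Sum across the 32 lanes of a warp using register shuffles only;
// after the loop, lane 0 holds the warp's total.
__inline__ __device__ float warpReduceSum(float val) {
    for (int offset = 16; offset > 0; offset >>= 1)
        val += __shfl_down_sync(0xffffffffu, val, offset);
    return val;
}

// Block-level reduction built from warp reductions: one shared-memory
// hop per warp, then a final warp reduction over the warp sums.
__global__ void reduceSum(const float* in, float* out, int n) {
    __shared__ float warpSums[32];
    float v = 0.0f;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        v += in[i];                              // grid-stride load
    v = warpReduceSum(v);
    int lane = threadIdx.x & 31, wid = threadIdx.x >> 5;
    if (lane == 0) warpSums[wid] = v;
    __syncthreads();
    v = (threadIdx.x < blockDim.x / 32) ? warpSums[lane] : 0.0f;
    if (wid == 0) v = warpReduceSum(v);
    if (threadIdx.x == 0) atomicAdd(out, v);     // combine block totals
}
```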
13. High-Performance Flow Classification of Big Data Using Hybrid CPU-GPU Clusters of Cloud Environments
Authors: Azam Fazel-Najafabadi, Mahdi Abbasi, Hani H. Attar, Ayman Amer, Amir Taherkordi, Azad Shokrollahi, Mohammad R. Khosravi, Ahmed A. Solyman. Tsinghua Science and Technology, SCIE EI CAS CSCD, 2024, No. 4, pp. 1118-1137 (20 pages)
The network switches in the data plane of Software Defined Networking (SDN) are empowered by an elementary process in which an enormous number of packets, resembling big volumes of data, are classified into specific flows by matching them against a set of dynamic rules. This basic process accelerates the processing of data, so that instead of processing singular packets repeatedly, corresponding actions are performed on corresponding flows of packets. In this paper, we first address the limitations of a typical packet classification algorithm, Tuple Space Search (TSS). Then, we present a set of different scenarios to parallelize it on different parallel processing platforms, including Graphics Processing Units (GPUs), clusters of Central Processing Units (CPUs), and hybrid clusters. Experimental results show that the hybrid cluster provides the best platform for parallelizing packet classification algorithms, promising an average throughput of 4.2 million packets per second (Mpps). That is, the hybrid cluster produced by the integration of the Compute Unified Device Architecture (CUDA), the Message Passing Interface (MPI), and the OpenMP programming model could classify 0.24 million packets per second more than the GPU cluster scheme. Such a packet classifier satisfies the required processing speed in programmable network systems that would be used to communicate big medical data.
Keywords: OpenMP; Compute Unified Device Architecture (CUDA); Message Passing Interface (MPI); packet classification; medical data; tuple space algorithm; Graphics Processing Unit (GPU) cluster
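In Tuple Space Search, rules are grouped by mask "tuple" and each packet probes every tuple with its masked header fields. The CUDA C++ sketch below uses one thread per packet and a linear scan within each tuple for clarity; real implementations use per-tuple hash tables, and the two-field rule layout here is an illustrative assumption.

```cuda
#include <cuda_runtime.h>

struct Rule { unsigned src, dst; int priority, action; };

// One thread per packet: mask its header with each tuple's masks and
// scan that tuple's rule list, keeping the highest-priority match.
__global__ void classifyTss(const unsigned* src, const unsigned* dst,
                            const Rule* rules, const int* tupleBegin,
                            const unsigned* maskS, const unsigned* maskD,
                            int nTuples, int* action, int nPkts) {
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= nPkts) return;
    int best = -1, bestPrio = -1;
    for (int t = 0; t < nTuples; ++t) {          // probe every tuple
        unsigned ks = src[p] & maskS[t], kd = dst[p] & maskD[t];
        for (int j = tupleBegin[t]; j < tupleBegin[t + 1]; ++j)
            if (rules[j].src == ks && rules[j].dst == kd &&
                rules[j].priority > bestPrio) {
                bestPrio = rules[j].priority;
                best = rules[j].action;
            }
    }
    action[p] = best;                            // -1 means no match
}
```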
14. A Cloud Service Architecture for Analyzing Big Monitoring Data (Cited: 3)
Authors: Samneet Singh, Yan Liu. Tsinghua Science and Technology, SCIE EI CAS CSCD, 2016, No. 1, pp. 55-70 (16 pages)
Cloud monitoring is a source of big data that are constantly produced from traces of infrastructures, platforms, and applications. Analysis of monitoring data delivers insights into the system's workload and usage pattern and ensures workloads are operating at optimum levels. The analysis process involves data query and extraction, data analysis, and result visualization. Since the volume of monitoring data is big, these operations require a scalable and reliable architecture to extract, aggregate, and analyze data at an arbitrary range of granularity. Ultimately, the results of analysis become the knowledge of the system and should be shared and communicated. This paper presents our cloud service architecture that explores a search cluster for data indexing and query. We develop REST APIs so that the data can be accessed by different analysis modules. This architecture enables extensions to integrate with software frameworks for both batch processing (such as Hadoop) and stream processing (such as Spark) of big data. The analysis results are structured in Semantic MediaWiki pages in the context of the monitoring data source and the analysis process. This cloud architecture is empirically assessed to evaluate its responsiveness when processing a large set of data records under node failures.
Keywords: cloud computing; REST API; big data; software architecture; semantic web
15. HXPY: A High-Performance Data Processing Package for Financial Time-Series Data
Authors: 郭家栋, 彭靖姝, 苑航, 倪明选. Journal of Computer Science & Technology, SCIE EI CSCD, 2023, No. 1, pp. 3-24 (22 pages)
A tremendous amount of data is generated by global financial markets every day, and such time-series data needs to be analyzed in real time to explore its potential value. In recent years, we have witnessed the successful adoption of machine learning models on financial data, where the importance of accuracy and timeliness demands highly effective computing frameworks. However, traditional financial time-series data processing frameworks have shown performance degradation and adaptation issues, such as the outlier handling with stock suspension in Pandas and TA-Lib. In this paper, we propose HXPY, a high-performance data processing package with a C++/Python interface for financial time-series data. HXPY supports miscellaneous acceleration techniques such as streaming algorithms, vectorization instruction sets, and memory optimization, together with various functions such as time window functions, group operations, down-sampling operations, cross-section operations, row-wise or column-wise operations, shape transformations, and alignment functions. The results of benchmark and incremental analysis demonstrate the superior performance of HXPY compared with its counterparts. From MiB- to GiB-scale data, HXPY significantly outperforms other in-memory dataframe computing rivals, even up to hundreds of times.
Keywords: dataframe; time-series data; SIMD (single instruction multiple data); CUDA (Compute Unified Device Architecture)
16. A CUDA-Based JPCG Parallel Algorithm for Solving the 3D DDA Equation Systems (Cited: 1)
Authors: 王占学, 杨军, 倪克松, 甯尤军. Chinese Journal of Rock Mechanics and Engineering, EI CAS CSCD, PKU Core, 2020, No. 6, pp. 1231-1241 (11 pages)
The discontinuous deformation analysis (DDA) method has been widely applied in geotechnical engineering. Unlike two-dimensional DDA, three-dimensional DDA is better suited to analyzing practical problems of deformation and stability in jointed rock masses. The more complex contacts between 3D DDA blocks, the sharp increase in the number of unknowns, and the management of data and memory in the program place higher demands on the stability and efficiency of solving the global equilibrium equations. The successive over-relaxation (SOR) solver used in the original DDA program fails to converge when the over-relaxation factor is chosen inappropriately. Based on the GPU and the compute unified device architecture (CUDA) parallel computing framework, a Jacobi-preconditioned conjugate gradient (JPCG) parallel solver for the 3D DDA global equilibrium equations was implemented, and numerical examples demonstrate the speedup achieved by combining the JPCG algorithm with GPU techniques. Compared with the original serial SOR algorithm, the method not only avoids the influence of the over-relaxation factor on convergence but also improves solution efficiency, creating favorable conditions for applying 3D DDA to practical rock mechanics and engineering problems.
Keywords: rock mechanics; 3D discontinuous deformation analysis method; compute unified device architecture (CUDA); Jacobi-preconditioned conjugate gradient method; successive over-relaxation algorithm
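In JPCG the preconditioner is just the matrix diagonal, so applying it is one elementwise kernel with no sequential dependence, unlike an SOR sweep; this is a large part of why it maps well to the GPU. A CUDA C++ sketch, with the surrounding CG loop outlined in comments (the SpMV, dot, and axpy kernel names are assumed):

```cuda
#include <cuda_runtime.h>

// Apply the Jacobi preconditioner M = diag(A): z = r ./ diag.
__global__ void jacobiApply(const double* diag, const double* r,
                            double* z, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) z[i] = r[i] / diag[i];
}

// Host-side JPCG outline (SpMV, dot, and axpy kernels assumed):
//   z = r ./ diag;  p = z;  rho = dot(r, z);
//   repeat: q = A * p;          alpha = rho / dot(p, q);
//           x += alpha * p;     r -= alpha * q;
//           z = r ./ diag;      rhoNew = dot(r, z);
//           p = z + (rhoNew / rho) * p;  rho = rhoNew;
//   until ||r|| is small enough.
```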
17. Hybrid Parallel Bundle Adjustment for 3D Scene Reconstruction with Massive Points (Cited: 4)
Authors: 刘鑫, 高伟, 胡占义. Journal of Computer Science & Technology, SCIE EI CSCD, 2012, No. 6, pp. 1269-1280 (12 pages)
Bundle adjustment (BA) is a crucial but time-consuming step in 3D reconstruction. In this paper, we intend to tackle a special class of BA problems where the reconstructed 3D points are much more numerous than the camera parameters, called Massive-Points BA (MPBA) problems. This is often the case when high-resolution images are used. We present a design and implementation of a new bundle adjustment algorithm for efficiently solving MPBA problems. The use of hardware parallelism, on multi-core CPUs as well as GPUs, is explored. By careful memory-usage design, the graphics-memory limitation is effectively alleviated. Several modern acceleration strategies for bundle adjustment, such as mixed-precision arithmetic, embedded point iteration, and preconditioned conjugate gradients, are explored and compared. Using several high-resolution image datasets, we generate a variety of MPBA problems, with which the performance of five bundle adjustment algorithms is evaluated. The experimental results show that our algorithm is up to 40 times faster than classical Sparse Bundle Adjustment, while maintaining comparable precision.
Keywords: sparse bundle adjustment; GPU; compute unified device architecture; structure from motion
18. Fast OBJ file importing and parsing in CUDA (Cited: 2)
Authors: Aidan L. Possemiers, Ickjai Lee. Computational Visual Media, 2015, No. 3, pp. 229-238 (10 pages)
Alias|Wavefront OBJ meshes are a common text file type for transferring 3D mesh data between applications made by different vendors. However, as mesh complexity gets higher and denser, the files become larger and slower to import. This paper explores the use of GPUs to accelerate the importing and parsing of OBJ files by studying file read time, runtime, and load resistance. We propose a new method of reading and parsing that circumvents GPU architecture limitations and improves performance, finding that the new GPU method outperforms CPU methods with a 6×-8× speedup. When running on a heavily loaded system, the new method received only an 80% performance hit, compared to the 160% that the CPU methods received. The loaded GPU speedup compared to unloaded CPU methods was 3.5×, and, when compared to loaded CPU methods, 8×. These results demonstrate that the time is right for further research into the use of data-parallel GPU acceleration beyond computer graphics and high-performance computing.
Keywords: parsing; OBJ; vertex buffer object (VBO); general-purpose programming on the graphics processing unit (GPGPU); compute unified device architecture (CUDA)
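A GPU cannot walk a text file byte by byte the way a sequential CPU parser does; one common workaround is to first flag line starts in parallel and then hand one line to each thread. A CUDA C++ sketch of that first stage is below (the buffer and flag names are assumptions; the paper's actual pipeline may differ).

```cuda
#include <cuda_runtime.h>
#include <cstddef>

// Stage 1 of parallel parsing: each thread inspects one byte of the
// raw OBJ buffer and flags line starts. An exclusive prefix sum over
// isStart (e.g., thrust::exclusive_scan) then numbers the lines, and a
// compaction yields one byte offset per line for per-thread parsing.
__global__ void flagLineStarts(const char* buf, int* isStart, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i >= n) return;
    isStart[i] = (i == 0 || buf[i - 1] == '\n') ? 1 : 0;
}
```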
19. High-performance solutions of geographically weighted regression in R (Cited: 1)
Authors: Binbin Lu, Yigong Hu, Daisuke Murakami, Chris Brunsdon, Alexis Comber, Martin Charlton, Paul Harris. Geo-Spatial Information Science, SCIE EI CSCD, 2022, No. 4, pp. 536-549 (14 pages)
As an established spatial analytical tool, Geographically Weighted Regression (GWR) has been applied across a variety of disciplines. However, its usage can be challenging for large datasets, which are increasingly prevalent in today's digital world. In this study, we propose two high-performance R solutions for GWR via Multi-core Parallel (MP) and Compute Unified Device Architecture (CUDA) techniques, respectively GWR-MP and GWR-CUDA. We compared GWR-MP and GWR-CUDA with three existing solutions available in Geographically Weighted Models (GWmodel), Multi-scale GWR (MGWR), and Fast GWR (FastGWR). Results showed that all five solutions perform differently across varying sample sizes, with no single solution a clear winner in terms of computational efficiency. Specifically, the solutions given in GWmodel and MGWR provided acceptable computational costs for GWR studies with a relatively small sample size. For a large sample size, GWR-MP and FastGWR provided coherent solutions on a Personal Computer (PC) with a common multi-core configuration, and GWR-MP provided more efficient computing capacity for each core or thread than FastGWR. For cases when the sample size was very large, and for these cases only, GWR-CUDA provided the most efficient solution, though its I/O cost with small samples should be noted. In summary, GWR-MP and GWR-CUDA provide complementary high-performance R solutions to existing ones and should be preferred for certain data-rich GWR studies.
Keywords: non-stationarity; big data; parallel computing; Compute Unified Device Architecture (CUDA); Geographically Weighted Models (GWmodel)
20. Implementation of the moving particle semi-implicit method on GPU (Cited: 2)
Authors: ZHU XiaoSong, CHENG Liang, LU Lin, TENG Bin. Science China (Physics, Mechanics & Astronomy), SCIE EI CAS, 2011, No. 3, pp. 523-532 (10 pages)
The Moving Particle Semi-implicit (MPS) method performs well in simulating violent free surface flow and hence has become popular in the area of fluid flow simulation. However, the implementations of searching for neighbouring particles and solving the large sparse matrix equations (Poisson-type equation) are very time-consuming. In order to utilize the tremendous parallel computing power of Graphics Processing Units (GPU), this study has developed a GPU-based MPS model employing the Compute Unified Device Architecture (CUDA) on an NVIDIA GTX 280. Efficient neighbouring particle searching is done through an indirect method, and the Poisson-type pressure equation is solved by the Bi-Conjugate Gradient (BiCG) method. Four different optimization levels for the present general parallel GPU-based MPS model are demonstrated. In addition, the elaborate optimization of the GPU code is also discussed. A benchmark problem of dam-breaking flow is simulated using both the present GPU-based MPS code and the original CPU-based MPS code. Comparisons between them show that the GPU-based MPS model outperforms the traditional CPU model by a factor of 26.
Keywords: moving particle semi-implicit method (MPS); graphics processing units (GPU); compute unified device architecture (CUDA); neighbouring particle searching; free surface flow
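The "indirect" neighbour search typically hashes particles into a uniform background grid so that sorting by cell index groups spatial neighbours together in memory. Step one of that pipeline might look like the CUDA C++ sketch below (cell size h of order the interaction radius; the names and grid layout are illustrative assumptions).

```cuda
#include <cuda_runtime.h>

// Bin each particle into a uniform background grid. Sorting particles
// by cellId afterwards (e.g., with thrust::sort_by_key) places each
// particle next to its spatial neighbours, so a later kernel only has
// to scan the 27 surrounding cells.
__global__ void computeCellIds(const float3* pos, int* cellId, int n,
                               float3 origin, float h, int nx, int ny) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    int cx = (int)((pos[i].x - origin.x) / h);
    int cy = (int)((pos[i].y - origin.y) / h);
    int cz = (int)((pos[i].z - origin.z) / h);
    cellId[i] = (cz * ny + cy) * nx + cx;        // linear cell index
}
```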