Abstract: Over the past decade, Graphics Processing Units (GPUs) have revolutionized high-performance computing, playing a pivotal role in advancing fields such as the Internet of Things (IoT), autonomous vehicles, and exascale computing. Despite these advances, efficiently programming GPUs remains a daunting challenge that often relies on trial-and-error optimization. This paper introduces an optimization technique for CUDA programs based on a novel data layout strategy that restructures how data are arranged in memory to significantly enhance access locality. Focusing on the dynamic programming algorithm for chained matrix multiplication, a critical operation across domains including artificial intelligence (AI), high-performance computing (HPC), and IoT, the technique produces more localized memory access. We illustrate the importance of efficient matrix multiplication in these areas, underscoring the technique's broader applicability and its potential to address some of the most pressing computational challenges in GPU-accelerated applications. Our findings show a marked reduction in memory consumption and a 50% decrease in execution time for CUDA programs that use the technique, setting a new benchmark for optimization in GPU computing.
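The abstract does not include code, but the general idea of a locality-oriented layout for the matrix-chain DP table can be illustrated with a diagonal-major storage scheme: the entries of one anti-diagonal, which are computed together, are stored contiguously so that neighbouring threads write to neighbouring addresses. The sketch below is a minimal illustration under that assumption; the helper `diagIndex`, the kernel name, and the launch loop are hypothetical and not taken from the paper.

```cuda
#include <climits>
#include <cuda_runtime.h>

// Hypothetical diagonal-major layout for the upper-triangular DP table m:
// all entries with j - i == d are stored contiguously, so the threads that
// fill one diagonal read/write adjacent addresses (better locality/coalescing).
__host__ __device__ inline size_t diagIndex(int i, int j, int n) {
    int d = j - i;                                             // diagonal number
    size_t offset = (size_t)d * n - (size_t)d * (d - 1) / 2;   // start of diagonal d
    return offset + i;                                         // slot inside it
}

// One launch per diagonal d: thread i computes the cost of multiplying A_i..A_{i+d}.
// p holds the n+1 matrix dimensions (A_i is p[i] x p[i+1]).
__global__ void chainDiagonal(const int *p, long long *m, int n, int d) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = i + d;
    if (j >= n) return;
    long long best = LLONG_MAX;
    for (int k = i; k < j; ++k) {
        long long cost = m[diagIndex(i, k, n)] + m[diagIndex(k + 1, j, n)]
                       + (long long)p[i] * p[k + 1] * p[j + 1];
        if (cost < best) best = cost;
    }
    m[diagIndex(i, j, n)] = best;
}

// Host side: diagonal 0 is zero-initialised, then diagonals are filled in order.
void solveChain(const int *d_p, long long *d_m, int n) {
    cudaMemset(d_m, 0, sizeof(long long) * (size_t)n * (n + 1) / 2);
    for (int d = 1; d < n; ++d) {
        int entries = n - d;
        int block = 256;
        chainDiagonal<<<(entries + block - 1) / block, block>>>(d_p, d_m, n, d);
    }
    cudaDeviceSynchronize();
}
```

In a conventional row-major table, the entries of one diagonal are separated by a stride of n + 1 elements; packing each diagonal contiguously is one way the claimed locality improvement could be realised.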
Abstract: The 3D reconstruction pipeline uses the Bundle Adjustment (BA) algorithm to refine camera and point parameters. Bundle Adjustment is compute-intensive, and many researchers have improved its performance by implementing it on GPUs. In the previous work, "Improving Accuracy and Computational Burden of Bundle Adjustment Algorithm using GPUs," the authors first improved the algorithm's accuracy, reducing the mean square error with an additional radial distortion parameter and explicitly computed analytical derivatives, and then reduced its computational burden using GPUs. With the naïve CUDA implementation, a speedup of 10× was achieved for the largest dataset of 13,678 cameras, 4,455,747 points, and 28,975,571 projections. In this paper, we present further optimization of the Bundle Adjustment CUDA code to achieve a higher speedup. We propose a new memory layout for the parameters of the Bundle Adjustment algorithm that results in contiguous memory access, and we demonstrate that it improves memory throughput on the GPU and thereby overall performance. We also increase the computational throughput of the algorithm by optimizing the CUDA kernels to use GPU resources effectively. A comparative performance study of explicitly computing an algorithm parameter versus using the Jacobians instead is presented. In the previous work, the Bundle Adjustment algorithm failed to converge for certain datasets because several camera block matrices in the augmented normal equation were rank-deficient. In this work, we identify the cameras that cause rank-deficient matrices and preprocess the datasets to ensure convergence of the BA algorithm. Our optimized CUDA implementation converges in around 22 seconds for the largest dataset, compared to 654 seconds for the sequential implementation, a speedup of 30×; it is also 3× faster on the largest dataset than the previous naïve CUDA implementation.
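The paper's exact layout is not reproduced here, but the usual pattern behind "contiguous memory access" for per-camera parameters is switching from an array-of-structures to a structure-of-arrays, so that when each thread handles one camera, the threads of a warp read consecutive elements of the same field. The sketch below illustrates only that pattern; the field names (rotation, translation, focal length, two radial distortion coefficients) and the toy kernel are assumptions, not the authors' actual data structures.

```cuda
#include <cuda_runtime.h>

// Array-of-structures: thread t reading cams[t].focal touches memory with a
// stride of sizeof(CameraAoS), so a warp's loads are scattered (uncoalesced).
struct CameraAoS {
    double rotation[3];     // angle-axis rotation
    double translation[3];  // camera position
    double focal;           // focal length
    double k1, k2;          // radial distortion coefficients (assumed fields)
};

// Structure-of-arrays: each field is a separate contiguous array, so thread t
// reading focal[t] sits next to thread t+1 reading focal[t+1] -> coalesced loads.
struct CamerasSoA {
    double *rotation;     // 3 * numCameras values, grouped by component
    double *translation;  // 3 * numCameras values, grouped by component
    double *focal;        // numCameras values
    double *k1, *k2;      // numCameras values each
};

// Toy kernel: per-camera computation using the SoA layout.
__global__ void scaleFocal(CamerasSoA cams, double *out, int numCameras) {
    int c = blockIdx.x * blockDim.x + threadIdx.x;
    if (c >= numCameras) return;
    // Consecutive threads read consecutive addresses of each field array.
    out[c] = cams.focal[c] * (1.0 + cams.k1[c] + cams.k2[c]);
}
```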
Abstract: Refactoring tools, whether fully automated or semi-automated, are essential components of the software development life cycle. As software libraries and frameworks evolve, it is crucial that programs using them also evolve to remain compatible with modern advancements. Take, for example, NVIDIA's CUDA platform for general-purpose GPU programming. Embracing the more contemporary unified memory architecture offers several benefits: it simplifies program source code, reduces bugs stemming from manual memory management between host and device memory, and optimizes memory transfers through automated memory handling. This paper describes our development of a refactoring tool based on Clang's LibTooling that performs this transition automatically, relieving developers of the burden and risk of manually refactoring large code bases.
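To make the target of such a transformation concrete, the sketch below shows, in plain CUDA, the kind of rewrite the tool is meant to perform: explicit cudaMalloc/cudaMemcpy pairs over separate host and device buffers are replaced by a single cudaMallocManaged allocation that both CPU and GPU access through one pointer. The kernel, names, and sizes are invented for illustration and are not taken from the paper.

```cuda
#include <cuda_runtime.h>

__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

// Before: explicit device buffer and manual copies in both directions.
void runExplicit(float *host, int n) {
    float *dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(dev, n);
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
}

// After: one managed allocation; the CUDA runtime migrates pages on demand.
// This is the shape of code the refactoring tool rewrites the version above into.
void runManaged(int n) {
    float *data = nullptr;
    cudaMallocManaged(&data, n * sizeof(float));
    for (int i = 0; i < n; ++i) data[i] = 1.0f;   // CPU writes the same pointer
    scale<<<(n + 255) / 256, 256>>>(data, n);
    cudaDeviceSynchronize();                       // wait before the CPU reads back
    cudaFree(data);
}
```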
Funding: This work was supported by the National Natural Science Foundation of China (Grant Nos. 52161002, 51661020, and 11364024), the Postdoctoral Science Foundation of China (Grant No. 2014M560371), and the Funds for Distinguished Young Scientists of Lanzhou University of Technology, China (Grant No. J201304).
Abstract: In this work, Al-4.5 wt.% Cu was selected as the research object, and a phase-field lattice Boltzmann method (PF-LBM) model based on the compute unified device architecture (CUDA) was established to overcome the low efficiency of serial computation on a traditional CPU and achieve significant acceleration. The model was used to explore the evolution of dendrite growth under natural convection. From the study of the tip velocities, it is found that natural convection inhibits the growth of the dendrite arms at the bottom while promoting the growth of the dendrite arms at the top. In addition, inclined dendrites under natural convection were investigated. A deviation is observed between the actual growth direction and the preferred angle of the inclined dendrite. As the preferred angle of the seed increases, the difference between the actual growth direction and the initial preferred angle of the inclined dendrite first increases and then decreases. In the simulation area, the relative deflection directions of the primary dendrite arms in the top-right and bottom-left corners of the same dendrite are almost counterclockwise, while the relative deflection directions of the other two primary dendrite arms are clockwise.
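The abstract does not show the CUDA structure of the PF-LBM solver, but the usual acceleration pattern is to map one lattice cell to one GPU thread and update the whole field in parallel at every time step. The sketch below shows that mapping for a generic explicit update on a 2D grid; the stencil, parameters, and kernel name are placeholders and do not represent the authors' PF-LBM model.

```cuda
#include <cuda_runtime.h>

// Generic one-thread-per-cell update on an nx x ny grid (row-major).
// phiNew = phiOld + dt * M * Laplacian(phiOld) stands in for the real
// phase-field / lattice-Boltzmann update, which is far more involved.
__global__ void updateField(const double *phiOld, double *phiNew,
                            int nx, int ny, double dt, double M) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x <= 0 || y <= 0 || x >= nx - 1 || y >= ny - 1) return;  // skip boundary cells
    int id = y * nx + x;
    double lap = phiOld[id - 1] + phiOld[id + 1]
               + phiOld[id - nx] + phiOld[id + nx] - 4.0 * phiOld[id];
    phiNew[id] = phiOld[id] + dt * M * lap;
}

// Host loop: double-buffered time stepping, swapping old/new fields each step,
// so the whole simulation stays on the GPU between output dumps.
void run(double *d_a, double *d_b, int nx, int ny, int steps) {
    dim3 block(16, 16);
    dim3 grid((nx + block.x - 1) / block.x, (ny + block.y - 1) / block.y);
    for (int s = 0; s < steps; ++s) {
        updateField<<<grid, block>>>(d_a, d_b, nx, ny, 1e-4, 1.0);
        double *tmp = d_a; d_a = d_b; d_b = tmp;   // swap buffers
    }
    cudaDeviceSynchronize();
}
```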