A new algorithm is proposed for restoring disocclusion regions in depth-image-based rendering (DIBR) warped images. Current solutions include layered depth images (LDI), pre-filtering methods, and post-processing methods. The LDI is complicated, and pre-filtering of depth images causes noticeable geometrical distortions in cases of large-baseline warping. This paper presents a depth-aided inpainting method that inherits the merits of Criminisi's inpainting algorithm. The proposed method incorporates a depth cue into texture estimation. The algorithm handles depth ambiguity efficiently by assigning larger penalizing Lagrange multipliers to filling points that are closer to the warping position than the surrounding existing points. We perform morphological operations on depth images to accelerate convergence, and adopt a luma-first strategy to accommodate various color sampling formats. Experiments on test multi-view sequences show that our method is superior in depth differentiation and geometrical fidelity when restoring warped images. Peak signal-to-noise ratio (PSNR) statistics on non-hole regions and on whole images also compare favorably with those obtained by state-of-the-art techniques.
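The abstract does not spell out the modified priority formula. As an illustrative sketch only, a Criminisi-style patch priority extended with a depth weight might look like the following; the function name, the `gamma` exponent, and the exact form of the depth term are assumptions, not the paper's actual definitions:

```python
import numpy as np

def patch_priority(confidence, data_term, depth, p, patch_radius=4, gamma=0.5):
    """Criminisi-style fill priority with a hypothetical depth weight.

    confidence, data_term, depth: 2-D arrays over the image.
    p = (row, col): a pixel on the fill front.
    Farther (background) patches get higher priority, so disocclusions
    are filled from the background rather than the foreground.
    """
    r, c = p
    sl = np.s_[max(r - patch_radius, 0):r + patch_radius + 1,
               max(c - patch_radius, 0):c + patch_radius + 1]
    C = confidence[sl].mean()            # Criminisi confidence term
    D = data_term[r, c]                  # isophote (data) term at p
    Z = depth[sl].mean() / depth.max()   # normalized mean patch depth
    return C * D * Z ** gamma            # depth-weighted priority
```

With this weighting, a patch lying on the background side of a depth discontinuity scores higher than an equally textured foreground patch, which matches the depth-differentiation goal described in the abstract.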
Binaural rendering is of great interest to virtual reality and immersive media. Although humans can naturally use their two ears to perceive the spatial information contained in sounds, it is a challenging task for machines to achieve binaural rendering, since the description of a sound field often requires multiple channels and even the metadata of the sound sources. In addition, the perceived sound varies from person to person even in the same sound field. Previous methods generally rely on individual-dependent head-related transfer function (HRTF) datasets and optimization algorithms that act on HRTFs. In practical applications, existing methods have two major drawbacks. The first is a high personalization cost, as traditional methods achieve personalization by measuring HRTFs. The second is insufficient accuracy, because traditional methods discard part of the information in order to retain the part that matters more to perception. It is therefore desirable to develop novel techniques that achieve personalization and accuracy at a low cost. To this end, we focus on the binaural rendering of ambisonics and propose 1) a channel-shared encoder and channel-compared attention integrated into neural networks and 2) a loss function quantifying interaural level differences to deal with spatial information. To verify the proposed method, we collect and release the first paired ambisonic-binaural dataset and introduce three metrics to evaluate the content-information and spatial-information accuracy of end-to-end methods. Extensive experimental results on the collected dataset demonstrate the superior performance of the proposed method and the shortcomings of previous methods.
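The interaural-level-difference loss is described only at a high level. A minimal full-band sketch is given below; the paper presumably computes per-band ILDs, and these function names are invented for illustration:

```python
import numpy as np

def ild_db(left, right, eps=1e-8):
    """Full-band interaural level difference in dB between two channels."""
    el = np.sum(left ** 2) + eps   # left-ear energy
    er = np.sum(right ** 2) + eps  # right-ear energy
    return 10.0 * np.log10(el / er)

def ild_loss(pred, ref):
    """L1 distance between predicted and reference ILDs.

    pred, ref: arrays of shape (2, num_samples) holding (left, right).
    """
    return abs(ild_db(pred[0], pred[1]) - ild_db(ref[0], ref[1]))
```

A loss of this shape is zero when the rendered binaural pair reproduces the reference level balance between the two ears, regardless of overall gain.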
With the development of virtual reality (VR) technology, more and more industries are beginning to integrate with VR. In response to the problem that the lighting effect of Caideng (colored lanterns) cannot be rendered directly in digital Caideng scenes, this article analyzes the lighting model and combines it with the lighting characteristics of Caideng scenes to design an optimized lighting-model algorithm that incorporates the bidirectional transmittance distribution function (BTDF). This algorithm can efficiently render the lighting effect of Caideng models in a virtual environment. Image optimization methods are then used to enhance the immersive experience in VR. Finally, a Caideng roaming interactive system was designed based on this method. The results show that the system's frame rate remains stable above 60 fps during operation and that the system provides a good immersive experience.
Background In recent years, with the rapid development of the mobile Internet and Web3D technologies, a large number of web-based online 3D visualization applications have emerged. Web3D online tourism, Web3D online architecture, Web3D online education environments, Web3D online medical care, and Web3D online shopping are examples of applications that leverage 3D rendering on the web. These applications have pushed the boundaries of traditional web applications, which use text, sound, image, video, and 2D animation as their main communication media, by resorting to 3D virtual scenes as the main interaction object, enabling a user experience with a strong sense of immersion. This paper approaches the emerging Web3D applications that strongly impact people's lives through "real-time rendering technology", the core technology of Web3D. It discusses the major 3D graphics APIs of Web3D and the well-known Web3D engines in China and abroad, and classifies the real-time rendering frameworks of Web3D applications into different categories. Results Finally, this study analyzed the specific demands posed by different fields on Web3D applications by referring to representative Web3D applications in each particular field. Conclusions Our survey shows that Web3D applications based on real-time rendering have penetrated deeply into many sectors of society and even the family, a trend that influences every industry.
Volume visualization can illustrate not only the overall distribution but also the inner structure of data, and it is an important approach in space environment research. Space environment simulation can produce several correlated variables at the same time. However, existing compressed volume rendering methods only reduce the redundant information within a single volume of a specific variable, not the redundant information among these variables. For space environment volume data with multiple correlated variables, we propose a further improved HVQ method, based on the HVQ-1d method, that composites variable-specific levels to reduce the redundancy among variables. The volume data associated with each variable is initially divided into disjoint blocks of size 4³. The blocks are represented at two levels, a mean level and a detail level. The variable-specific mean levels and detail levels are combined respectively to form a larger global mean level and a larger global detail level. To both global levels, a splitting based on principal component analysis is applied to compute initial codebooks. Then, the LBG algorithm is used for codebook refinement and quantization. We further take advantage of GPU-based progressive rendering for real-time interactive visualization. Our method has been tested along with HVQ and HVQ-1d on high-energy proton flux volume data, including >5, >10, >30 and >50 MeV integrated proton flux. The results of our experiments show that the proposed method sacrifices the least quality during compression, achieves higher decompression and rendering speed than HVQ, and provides satisfactory fidelity while ensuring interactive rendering speed.
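The mean/detail two-level block representation can be sketched as follows. This is a simplified single-variable version with illustrative names; the actual method additionally merges the variable-specific levels into global levels and vector-quantizes them with PCA-based splitting and LBG refinement:

```python
import numpy as np

def mean_detail_levels(volume, block=4):
    """Split a volume into per-block means and zero-mean details (sketch).

    volume: 3-D array whose side lengths are multiples of `block`
    (4 in the paper, i.e. 4x4x4 blocks).
    Returns (means, details): means holds one value per block (the mean
    level); details holds the per-voxel residuals (the detail level).
    Adding a block's mean back to its residuals restores the volume.
    """
    z, y, x = volume.shape
    blocks = volume.reshape(z // block, block, y // block, block,
                            x // block, block)
    means = blocks.mean(axis=(1, 3, 5))                   # mean level
    details = blocks - means[:, None, :, None, :, None]   # detail level
    return means, details.reshape(volume.shape)
```

Because the detail level is zero-mean within each block, it quantizes well with a shared codebook, which is the point of the two-level decomposition.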
Point-based rendering is a common method widely used in point cloud rendering. It realizes rendering by turning the points into base geometry. The critical step in point-based rendering is to set an appropriate rendering radius for the base geometry, usually calculated as the average Euclidean distance of the N nearest neighboring points to the rendered point. This method effectively reduces the appearance of empty spaces between points in rendering. However, it also causes a problem: the rendering radius of outlier points far from the central region of the point cloud sequence can be large, which impacts perceptual quality. To solve this problem, we propose an algorithm for point-based point cloud rendering that uses outlier detection to optimize the perceptual quality of rendering. The algorithm determines whether the detected points are outliers using a combination of local and global geometric features. For the detected outliers, the minimum radius is used for rendering. We examine the performance of the proposed method in terms of both objective quality and perceptual quality. The experimental results show that the peak signal-to-noise ratio (PSNR) of the point cloud sequences is improved under all geometric quantization levels, and the PSNR improvement ratio is more evident in dense point clouds. Specifically, the PSNR of the point cloud sequences is improved by 3.6% on average compared with the original algorithm. The proposed method significantly improves the perceptual quality of the rendered point clouds, and the results of ablation studies prove the feasibility and effectiveness of the proposed method.
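A simplified sketch of the radius computation with outlier clamping is shown below. Here a global z-score threshold stands in for the paper's combination of local and global geometric features, and all names and constants are assumptions:

```python
import numpy as np

def rendering_radii(points, k=4, r_min=0.01, z_outlier=2.0):
    """Per-point rendering radius with outlier clamping (illustrative).

    Radius = mean distance to the k nearest neighbors, as in the
    original scheme. Points whose radius is more than z_outlier standard
    deviations above the mean radius are treated as outliers and
    rendered with the minimum radius r_min instead.
    points: (N, 3) array. Returns (radii, outlier_mask).
    """
    # Brute-force pairwise distances; a k-d tree would be used at scale.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # exclude self-distance
    knn = np.sort(d, axis=1)[:, :k]       # k nearest-neighbor distances
    radii = knn.mean(axis=1)
    outlier = radii > radii.mean() + z_outlier * radii.std()
    radii[outlier] = r_min                # clamp outliers to minimum radius
    return radii, outlier
```

Clamping prevents an isolated point from being rendered as a large splat that visually dominates the scene, which is exactly the artifact the abstract describes.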
Background Mixed reality (MR) video fusion systems merge video imagery with 3D scenes to make the scene more realistic, help users understand the video content and the temporal-spatial correlation between them, and reduce the user's cognitive load. MR video fusion is used in various applications; however, video fusion systems require powerful client machines because video streaming delivery, stitching, and rendering are computationally intensive. Moreover, huge bandwidth usage is another critical factor that affects the scalability of video fusion systems. Methods Our framework proposes a fusion method for dynamically projecting video images onto 3D models as textures. Results Several experiments on different metrics demonstrate the effectiveness of the proposed framework. Conclusions The framework proposed in this study can overcome client limitations by utilizing remote rendering. Furthermore, the framework is browser-based, so users can test the MR video fusion system on a laptop or tablet without installing any additional plug-ins or applications.
Computer-aided diagnosis based on image color rendering promotes medical image analysis and doctor-patient communication by highlighting important information for medical diagnosis. To overcome the limitations of deep-learning-based color rendering methods, such as poor model stability, poor rendering quality, fuzzy boundaries and crossed color boundaries, we propose a novel hinge-cross-entropy generative adversarial network (HCEGAN). A self-attention mechanism was added and improved to focus on the important information in the image, and the hinge-cross-entropy loss function was used to stabilize the training process of GAN models. In this study, we implement the HCEGAN model for image color rendering based on the DIV2K and COCO datasets, and evaluate the results using SSIM and PSNR. The experimental results show that the proposed HCEGAN automatically re-renders images, significantly improves the quality of color rendering, and greatly improves the stability of prior GAN models.
We demonstrate gold nanoclusters as color-tunable emissive light converters for white light-emitting diodes (WLEDs). A blue LED emitting at 460 nm excites gold nanoclusters mixed with a UV-curable material, generating broadband emission across the visible range. Increasing the amount of gold nanoclusters tunes the correlated color temperature of the WLEDs from cold white to warm white, and also varies the color rendering index (CRI). The highest CRI achieved in the experiment is 92.
The ray casting algorithm can produce better-quality images in volume rendering; however, it suffers from problems such as heavy computational demand and slow rendering speed. Improving the resampling speed is key to accelerating the ray casting algorithm. An algorithm is introduced that reduces matrix computation by exploiting the matrix-transformation characteristics of resampling points between two coordinate systems. The projection of the 3-D dataset onto the image plane is used to reduce the number of rays, and a bounding-box technique avoids sampling empty voxels. By extending the Bresenham algorithm to three dimensions, each resampling point is calculated incrementally. Experimental results show a two- to three-fold improvement in rendering speed with the optimized algorithm, with image quality similar to that of the traditional algorithm. The optimized algorithm thus produces the required image quality while reducing the total number of operations and speeding up volume rendering.
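The three-dimensional extension of the Bresenham algorithm mentioned above steps through voxels along a ray using only integer arithmetic, avoiding per-sample matrix math. A standard sketch (not the paper's exact code) is:

```python
def bresenham3d(p0, p1):
    """Integer voxels visited by a 3-D Bresenham line from p0 to p1.

    The axis with the largest extent drives the loop; the two error
    terms decide when the other two coordinates step. This is the
    classic driving-axis formulation of 3-D Bresenham.
    """
    x, y, z = p0
    x1, y1, z1 = p1
    dx, dy, dz = abs(x1 - x), abs(y1 - y), abs(z1 - z)
    sx = 1 if x1 >= x else -1
    sy = 1 if y1 >= y else -1
    sz = 1 if z1 >= z else -1
    voxels = [(x, y, z)]
    if dx >= dy and dx >= dz:              # x is the driving axis
        e1, e2 = 2 * dy - dx, 2 * dz - dx
        while x != x1:
            x += sx
            if e1 >= 0:
                y += sy
                e1 -= 2 * dx
            if e2 >= 0:
                z += sz
                e2 -= 2 * dx
            e1 += 2 * dy
            e2 += 2 * dz
            voxels.append((x, y, z))
    elif dy >= dx and dy >= dz:            # y is the driving axis
        e1, e2 = 2 * dx - dy, 2 * dz - dy
        while y != y1:
            y += sy
            if e1 >= 0:
                x += sx
                e1 -= 2 * dy
            if e2 >= 0:
                z += sz
                e2 -= 2 * dy
            e1 += 2 * dx
            e2 += 2 * dz
            voxels.append((x, y, z))
    else:                                  # z is the driving axis
        e1, e2 = 2 * dy - dz, 2 * dx - dz
        while z != z1:
            z += sz
            if e1 >= 0:
                y += sy
                e1 -= 2 * dz
            if e2 >= 0:
                x += sx
                e2 -= 2 * dz
            e1 += 2 * dy
            e2 += 2 * dx
            voxels.append((x, y, z))
    return voxels
```

Enumerating resampling positions incrementally like this replaces a matrix-vector product per sample with a handful of integer additions, which is where the speedup comes from.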
A very-high color rendering index white organic light-emitting diode (WOLED) based on a simple structure was successfully fabricated. The optimized device exhibits a maximum total efficiency of 13.1 lm/W, and 5.4 lm/W at 1,000 cd/m². A peak color rendering index of 90 and a relatively stable color over a wide range of luminance were obtained. In addition, it was demonstrated that the 4,4',4''-tri(9-carbazolyl)triphenylamine host strongly influenced the performance of this WOLED. These results may be beneficial to the design of both materials and device architectures for high-performance WOLEDs.
The visual fidelity of bleeding simulation in a surgical simulator is critical, since it affects not only the degree of visual realism but also the user's medical judgment and treatment in real-life settings. The conventional marching cubes surface-rendering algorithm provides excellent visual effects when rendering gushing blood; however, it is insufficient for flowing blood, which is very common in surgical procedures, since in this case the rendered surface and depth textures of the blood are rough. In this paper, we propose a new method, called mixed depth rendering, for rendering blood flow in surgical simulation. A smooth height field is created to minimize the height difference between neighboring particles on the bleeding surface. The color and transparency of each bleeding area are determined by the number of bleeding particles, which is consistent with the real visual effect. In addition, the method incurs little extra computational cost. The rendering of blood flow in a variety of surgical scenarios shows that visual feedback is much improved. The proposed mixed depth rendering method is also used in a neurosurgery simulator that we developed.
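The smooth height field is described only qualitatively. One hedged way to realize "minimizing the height difference between neighboring particles" is iterative Laplacian smoothing; this is an assumption for illustration, and the paper may use a different smoother:

```python
import numpy as np

def smooth_height_field(h, iterations=10, alpha=0.5):
    """Laplacian smoothing of a 2-D particle height field (sketch).

    Each pass moves every interior height toward the average of its
    four neighbors, shrinking height differences between neighboring
    particles. Boundary heights are left fixed.
    """
    h = np.asarray(h, dtype=float).copy()
    for _ in range(iterations):
        avg = 0.25 * (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
                      np.roll(h, 1, 1) + np.roll(h, -1, 1))
        # Relax only the interior so the field boundary stays put.
        h[1:-1, 1:-1] += alpha * (avg - h)[1:-1, 1:-1]
    return h
```

After a few passes an isolated spike is diffused into a gentle bump, which is the kind of smooth blood surface the method aims for.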
The use of compressed meshes in parallel rendering architectures is still an unexplored area, the main challenge of which is to partition and sort the encoded mesh in the compression domain. This paper presents a mesh compression scheme, PRMC (Parallel Rendering based Mesh Compression), which supplies encoded meshes that can be partitioned and sorted in a parallel rendering system even in the encoded domain. First, we segment the mesh into submeshes and clip the submeshes' boundaries into Runs, and then compress the submeshes and Runs piecewise. With the help of several auxiliary index tables, compressed submeshes and Runs can serve as rendering primitives in a parallel rendering system. Based on PRMC, we design and implement a parallel rendering architecture. Compared with the uncompressed representation, experimental results show that PRMC meshes applied in a cluster parallel rendering system can dramatically reduce communication requirements.
Real-time rendering and lighting of forests is a difficult problem in visual simulation systems. Image-based rendering (IBR) is widely used in scene reconstruction because its rendering speed is independent of model complexity. Based on light field rendering IBR techniques, an iterative projection algorithm is proposed for shape reconstruction, achieving forest effects with real-time lighting and shadow features. Experiments show that the algorithm combines the respective advantages of traditional iterative and projection algorithms, striking a balance between quality and efficiency.
Since the 1980s, various techniques have been used in medicine for the post-processing of medical imaging data from computed tomography (CT) and magnetic resonance (MR). They include multiplanar reformation (MPR), maximum intensity projection (MIP) and volume rendering (VR). This paper presents the prototype of a new means of post-processing radiological examinations such as CT and MR, a technique that, for the first time, provides photorealistic visualizations of the human body. The new procedure was inspired by the image quality achieved by animation software such as the programs used in the entertainment industry, particularly to produce animated films; thus the name: Cinematic Rendering. It is already foreseeable that this new method of depiction will quickly be incorporated into the set of instruments employed in so-called virtual anatomy (teaching anatomy through radiological depictions of the human body via X-ray, CT and MR, in addition to computer animation programs designed especially for human anatomy). Its potential for medical applications will have to be evaluated by future scientific investigations.
Aiming to address the difficult issues of terrain data model simplification and crack disposal, this paper proposes an improved level-of-detail (LOD) terrain rendering algorithm in which a variation coefficient of elevation is introduced to express the undulation of the topography. The coefficient is then used to construct a node evaluation function in the terrain data model simplification step. Furthermore, an edge reduction strategy is combined with improved restrictive quadtree segmentation to handle the crack problem. Experimental results demonstrate that, compared with a traditional LOD algorithm, the proposed method reduces the number of rendered triangles and increases rendering speed while preserving the rendering effect.
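The variation coefficient of elevation is the classical coefficient of variation applied to a patch's heights. A sketch of it, together with a hypothetical node evaluation function, follows; the distance scaling and the threshold are invented for illustration and are not the paper's actual formula:

```python
import numpy as np

def elevation_variation(heights, eps=1e-12):
    """Coefficient of variation of elevation in a terrain patch:
    std / |mean|, a dimensionless measure of undulation."""
    h = np.asarray(heights, dtype=float)
    return h.std() / (abs(h.mean()) + eps)

def should_split(heights, view_distance, threshold=0.05):
    """Hypothetical quadtree node evaluation: split the node when its
    undulation, scaled by the inverse viewing distance, exceeds a
    threshold, so rough terrain near the viewer gets more triangles."""
    return elevation_variation(heights) / max(view_distance, 1.0) > threshold
```

Flat patches and distant patches fail the test and stay coarse, which is how the algorithm cuts the triangle count without visibly degrading the rendering.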
Modern computer techniques have been in use for several years to generate three-dimensional visualizations of human anatomy. Very good 3-D computer models of the human body are now available and used routinely in anatomy instruction. These techniques are subsumed under the heading "virtual anatomy" to distinguish them from the conventional study of anatomy entailing cadavers and anatomy textbooks. Moreover, other imaging procedures (X-ray, angiography, CT and MR) are also used in virtual anatomy instruction. A recently introduced three-dimensional post-processing technique named Cinematic Rendering now makes it possible to use the output of routine CT and MR examinations as the basis for highly photorealistic 3-D depictions of human anatomy. We have installed Cinematic Rendering (enabled for stereoscopy) in a high-definition 8K 3-D projection space that accommodates an audience of 150. The space's projection surface measures 16 × 9 meters; images can be projected on both the front wall and the floor. A game controller can be used to operate the Cinematic Rendering software so that it generates interactive real-time depictions of human anatomy from CT and MR data sets. This prototype installation was implemented without technical problems; in day-to-day, real-world use over a period of 22 months, there were no impairments of service due to software crashes or other technical problems. We already employ this installation routinely for educational offerings open to the public, courses for students in the health professions, and continuing professional education units for medical interns, residents and specialists: in, so to speak, the dissecting theater of the future.
Levofloxacin (LOFX), which is well known as an antibiotic, was shown to be useful as a 452-nm blue emitter for white organic light-emitting diodes (OLEDs). The fabricated white OLED contains a 452-nm blue emitting layer (thickness 30 nm) of 1 wt% LOFX doped in a CBP (4,4'-bis(carbazol-9-yl)biphenyl) host and a 584-nm orange emitting layer (thickness 10 nm) of 0.8 wt% DCJTB (4-(dicyanomethylene)-2-tert-butyl-6-(1,1,7,7-tetramethyljulolidin-4-yl-vinyl)-4H-pyran) doped in CBP, the two separated by a 20-nm-thick buffer layer of TPBi (2,2',2''-(benzene-1,3,5-triyl)-tri(1-phenyl-1H-benzimidazole)). A high color rendering index (CRI) of 84.5 and CIE chromaticity coordinates of (0.33, 0.32), close to the ideal white-emission CIE coordinates (0.333, 0.333), are obtained at a bias voltage of 14 V. Given that LOFX is inexpensive and that its synthesis and purification technologies are mature, these results indicate that blue-fluorescent LOFX is useful for white OLED applications, although the maximum current efficiency and luminance are not high. The present paper is expected to become a milestone in the use of medical drug materials for OLEDs.
The laser scanning confocal endomicroscope (LSCEM) has emerged as an imaging modality that provides noninvasive, in vivo imaging of biological tissue on a microscopic scale. Scientific visualization of LSCEM datasets captured by current imaging systems requires the datasets to be fully acquired and transferred to a separate rendering machine. To extend the features and capabilities of this modality, we propose a system capable of real-time visualization of LSCEM datasets. Using field-programmable gate arrays, our system performs three tasks in parallel: (1) automated control of dataset acquisition; (2) imaging-rendering system synchronization; and (3) real-time volume rendering of dynamic datasets. Through fusion of the LSCEM imaging and volume rendering processes, acquired datasets can be visualized in real time to provide an immediate perception of the image quality and the biological condition of the subject, further assisting real-time cancer diagnosis. The imaging procedure can consequently be improved for more accurate diagnosis, reducing the need to repeat the process because of unsatisfactory datasets.
Funding: Project supported by the National Natural Science Foundation of China (No. 60802013) and the Natural Science Foundation of Zhejiang Province, China (No. Y106574).
Funding: Supported in part by the National Natural Science Foundation of China (62176059, 62101136).
Funding: Supported by the Science and Technology Program of the Educational Commission of Jiangxi Province, China (DA202104172), the Innovation and Entrepreneurship Course Program of Nanchang Hangkong University (KCPY1910), and the Teaching Reform Research Program of Nanchang Hangkong University (JY21040).
Funding: Supported by the Key Research Program of the Chinese Academy of Sciences (ZDRE-KT-2021-3).
文摘Volume visualization can not only illustrate overall distribution but also inner structure and it is an important approach for space environment research.Space environment simulation can produce several correlated variables at the same time.However,existing compressed volume rendering methods only consider reducing the redundant information in a single volume of a specific variable,not dealing with the redundant information among these variables.For space environment volume data with multi-correlated variables,based on the HVQ-1d method we propose a further improved HVQ method by compositing variable-specific levels to reduce the redundant information among these variables.The volume data associated with each variable is divided into disjoint blocks of size 43 initially.The blocks are represented as two levels,a mean level and a detail level.The variable-specific mean levels and detail levels are combined respectively to form a larger global mean level and a larger global detail level.To both global levels,a splitting based on a principal component analysis is applied to compute initial codebooks.Then,LBG algorithm is conducted for codebook refinement and quantization.We further take advantage of progressive rendering based on GPU for real-time interactive visualization.Our method has been tested along with HVQ and HVQ-1d on high-energy proton flux volume data,including>5,>10,>30 and>50 MeV integrated proton flux.The results of our experiments prove that the method proposed in this paper pays the least cost of quality at compression,achieves a higher decompression and rendering speed compared with HVQ and provides satisficed fidelity while ensuring interactive rendering speed.
Abstract: Point-based rendering is a common method widely used in point cloud rendering. It realizes rendering by turning the points into a base geometry. The critical step in point-based rendering is setting an appropriate rendering radius for the base geometry, usually calculated as the average Euclidean distance of the N nearest neighboring points to the rendered point. This method effectively reduces the appearance of empty spaces between points in rendering. However, it also causes a problem: the rendering radius of outlier points far away from the central region of the point cloud sequence can be large, which harms the perceptual quality. To solve this problem, we propose an algorithm for point-based point cloud rendering through outlier detection to optimize the perceptual quality of rendering. The algorithm determines whether the detected points are outliers using a combination of local and global geometric features. Detected outliers are rendered with the minimum radius. We examine the performance of the proposed method in terms of both objective quality and perceptual quality. The experimental results show that the peak signal-to-noise ratio (PSNR) of the point cloud sequences is improved at all geometry quantization levels, and the PSNR improvement ratio is more evident in dense point clouds. Specifically, the PSNR of the point cloud sequences is improved by 3.6% on average compared with the original algorithm. The proposed method significantly improves the perceptual quality of the rendered point clouds, and the results of ablation studies prove the feasibility and effectiveness of the proposed method.
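The radius computation and outlier clamping described above can be sketched as follows. This is a hedged illustration: the neighbor count, the mean-plus-k-standard-deviations outlier rule, and the minimum radius value are assumptions standing in for the paper's exact combination of local and global geometric features.

```python
import numpy as np

def point_radii(points, n_neighbors=4, k=2.0, r_min=0.01):
    """Per-point rendering radius: the mean Euclidean distance to the
    n_neighbors nearest points. Points whose radius exceeds the global
    mean by more than k standard deviations are treated as outliers and
    clamped to r_min (illustrative thresholding, not the paper's exact
    detector)."""
    # Pairwise distance matrix; fine for small clouds, O(N^2) memory.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # ignore self-distance
    nearest = np.sort(d, axis=1)[:, :n_neighbors]
    radii = nearest.mean(axis=1)           # local feature: neighbor spacing
    # Global feature: deviation from the cloud-wide radius statistics.
    outliers = radii > radii.mean() + k * radii.std()
    radii[outliers] = r_min                # render outliers at minimum radius
    return radii, outliers
```

On a dense cluster plus one stray point, the stray point's raw radius dominates the statistics, so it is flagged and clamped while the cluster keeps its neighbor-derived radii.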
Funding: Supported by the National Key R&D Program of China (2018YFB2100601) and the National Natural Science Foundation of China (61872024).
Abstract: Background Mixed reality (MR) video fusion systems merge video imagery with 3D scenes to make the scene more realistic and help users understand the video content and the temporal-spatial correlation between videos, reducing the user's cognitive load. MR video fusion is used in various applications; however, video fusion systems require powerful client machines because video streaming delivery, stitching, and rendering are computationally intensive. Moreover, heavy bandwidth usage is another critical factor that affects the scalability of video fusion systems. Methods Our framework proposes a fusion method for dynamically projecting video images onto 3D models as textures. Results Several experiments on different metrics demonstrate the effectiveness of the proposed framework. Conclusions The framework proposed in this study can overcome client limitations by utilizing remote rendering. Furthermore, the framework is browser-based, so users can test the MR video fusion system on a laptop or tablet without installing any additional plug-ins or application programs.
Funding: Foundation of China (No. 61902311); supported in part by the Natural Science Foundation of Shaanxi Province, China, under Grants 2022JM-508, 2022JM-317, and 2019JM-162.
Abstract: Computer-aided diagnosis based on image color rendering promotes medical image analysis and doctor-patient communication by highlighting important information in the medical diagnosis. To overcome the limitations of deep-learning-based color rendering methods, such as poor model stability, poor rendering quality, fuzzy boundaries, and crossed color boundaries, we propose a novel hinge-cross-entropy generative adversarial network (HCEGAN). A self-attention mechanism was added and improved to focus on the important information in the image, and the hinge-cross-entropy loss function was used to stabilize the training process of the GAN models. In this study, we implement the HCEGAN model for image color rendering on the DIV2K and COCO datasets and evaluate the results using SSIM and PSNR. The experimental results show that the proposed HCEGAN automatically re-renders images, significantly improves the quality of color rendering, and greatly improves the stability of prior GAN models.
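For context, the hinge loss commonly used to stabilize GAN training looks as follows. This is a generic sketch of the standard hinge formulation only; HCEGAN's exact hinge-cross-entropy combination is not given in the abstract and is not reproduced here.

```python
import numpy as np

def hinge_d_loss(real_scores, fake_scores):
    """Standard hinge loss for a GAN discriminator: penalize real
    samples scored below +1 and fake samples scored above -1, which
    bounds gradients and tends to stabilize training."""
    return (np.maximum(0.0, 1.0 - real_scores).mean()
            + np.maximum(0.0, 1.0 + fake_scores).mean())

def hinge_g_loss(fake_scores):
    """Matching generator loss: push fake scores upward."""
    return -fake_scores.mean()
```

When the discriminator scores real samples above +1 and fake samples below -1, its hinge loss is exactly zero, so well-separated samples stop contributing gradient.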
Abstract: We demonstrated gold nanoclusters as color-tunable emissive light converters for white light-emitting diodes (WLEDs). A blue LED providing 460-nm excitation of gold nanoclusters mixed with a UV-curable material generates broadband emission across the visible range. Increasing the amount of gold nanoclusters tunes the correlated color temperature of the WLEDs from cold white to warm white and also varies the color rendering index (CRI). The highest CRI in the experiment is 92.
Abstract: The ray casting algorithm can obtain better-quality images in volume rendering; however, it suffers from problems such as heavy computational demands and slow rendering speed. Improving the re-sampling speed is key to speeding up the ray casting algorithm. An algorithm is introduced to reduce matrix computation by exploiting the matrix transformation characteristics of re-sampling points between two coordinate systems. The projection of the 3-D dataset onto the image plane is used to reduce the number of rays. A bounding-box technique avoids sampling in empty voxels. By extending the Bresenham algorithm to three dimensions, each re-sampling point is calculated. Experimental results show a two- to three-fold improvement in rendering speed with the optimized algorithm, with image quality similar to that of the traditional algorithm. The optimized algorithm produces images of the required quality, thus reducing the total number of operations and speeding up volume rendering.
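The three-dimensional extension of the Bresenham algorithm mentioned above can be sketched as follows. This is the standard driving-axis formulation for stepping through integer voxel coordinates; how the paper couples it to re-sampling and the bounding-box test is not shown.

```python
def bresenham3d(p0, p1):
    """Integer 3D Bresenham traversal from voxel p0 to voxel p1.
    The axis with the largest span drives the loop; the two error
    terms decide when the other coordinates step."""
    x, y, z = p0
    x1, y1, z1 = p1
    dx, dy, dz = abs(x1 - x), abs(y1 - y), abs(z1 - z)
    sx = 1 if x1 > x else -1
    sy = 1 if y1 > y else -1
    sz = 1 if z1 > z else -1
    points = [(x, y, z)]
    if dx >= dy and dx >= dz:          # x is the driving axis
        e1, e2 = 2 * dy - dx, 2 * dz - dx
        for _ in range(dx):
            if e1 > 0: y += sy; e1 -= 2 * dx
            if e2 > 0: z += sz; e2 -= 2 * dx
            e1 += 2 * dy; e2 += 2 * dz; x += sx
            points.append((x, y, z))
    elif dy >= dx and dy >= dz:        # y drives
        e1, e2 = 2 * dx - dy, 2 * dz - dy
        for _ in range(dy):
            if e1 > 0: x += sx; e1 -= 2 * dy
            if e2 > 0: z += sz; e2 -= 2 * dy
            e1 += 2 * dx; e2 += 2 * dz; y += sy
            points.append((x, y, z))
    else:                              # z drives
        e1, e2 = 2 * dx - dz, 2 * dy - dz
        for _ in range(dz):
            if e1 > 0: x += sx; e1 -= 2 * dz
            if e2 > 0: y += sy; e2 -= 2 * dz
            e1 += 2 * dx; e2 += 2 * dy; z += sz
            points.append((x, y, z))
    return points
```

Because each step is pure integer arithmetic, re-sampling positions along a ray come out of additions and comparisons only, which is what makes this attractive for accelerating ray casting.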
Funding: the National Natural Science Foundation of China (Grant Nos. 61204087, 61306099), the Guangdong Natural Science Foundation (Grant No. S2012040007003), the China Postdoctoral Science Foundation (2013M531841), the Fundamental Research Funds for the Central Universities (2014ZM0003, 2014ZM0034, 2014ZM0037, 2014ZZ0028), and the Specialized Research Fund for the Doctoral Program of Higher Education (Grant No. 20120172120008).
Abstract: A white organic light-emitting diode (WOLED) with a very high color rendering index and a simple structure was successfully fabricated. The optimized device exhibits a maximum total efficiency of 13.1 lm/W, and 5.4 lm/W at 1,000 cd/m². A peak color rendering index of 90 and a relatively stable color over a wide luminance range were obtained. In addition, it was demonstrated that the 4,4′,4″-tri(9-carbazoyl)triphenylamine host strongly influenced the performance of this WOLED. These results may benefit the design of both materials and device architectures for high-performance WOLEDs.
Funding: supported by the National Natural Science Foundation of China (61773051, 61761166011, 51705016), the Beijing Natural Science Foundation (4172048), and the Fundamental Research Funds for the Central Universities (2017JBZ003).
Abstract: The visual fidelity of bleeding simulation in a surgical simulator is critical, since it affects not only the degree of visual realism but also the user's medical judgment and treatment in real-life settings. The conventional marching cubes surface rendering algorithm provides an excellent visual effect when rendering gushing blood; however, it is insufficient for blood flow, which is very common in surgical procedures, since in this case the rendered surface and depth textures of blood are rough. In this paper, we propose a new method, called mixed depth rendering, for rendering blood flow in surgical simulation. A smooth height field is created to minimize the height difference between neighboring particles on the bleeding surface. The color and transparency of each bleeding area are determined by the number of bleeding particles, which is consistent with the real visual effect. In addition, the method incurs little extra computational cost. The rendering of blood flow in a variety of surgical scenarios shows that visual feedback is much improved. The proposed mixed depth rendering method is also used in a neurosurgery simulator that we developed.
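The smooth-height-field idea can be illustrated with a generic Laplacian relaxation that shrinks height differences between neighboring grid cells. The update rule, the periodic boundary handling, and the parameters below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def smooth_height_field(h, iterations=10, alpha=0.5):
    """Iteratively blend each cell of a 2D height field toward the
    average of its 4-neighbors (periodic boundaries via np.roll),
    reducing height differences between neighboring particles."""
    h = h.astype(float).copy()
    for _ in range(iterations):
        avg = (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
               np.roll(h, 1, 1) + np.roll(h, -1, 1)) / 4.0
        h = (1 - alpha) * h + alpha * avg   # relax toward neighbor average
    return h
```

A single sharp spike spreads into a smooth mound while the total height (and thus, loosely, the blood volume it represents) is conserved, which is the qualitative behavior a smooth bleeding surface needs.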
Funding: Project supported by the National Basic Research Program (973) of China (No. 2002CB312105), the National Natural Science Foundation of China (No. 60573074), the Natural Science Foundation of Shanxi Province, China (No. 20041040), the Shanxi Foundation for Tackling Key Problems in Science and Technology (No. 051129), and the Key NSFC Project "Digital Olympic Museum" (No. 60533080), China.
Abstract: The use of compressed meshes in parallel rendering architectures is still an unexplored area, the main challenge of which is to partition and sort the encoded mesh in the compression domain. This paper presents a mesh compression scheme, PRMC (Parallel Rendering based Mesh Compression), supplying encoded meshes that can be partitioned and sorted in a parallel rendering system even in the encoded domain. First, we segment the mesh into submeshes and clip the submeshes' boundaries into Runs, and then compress the submeshes and Runs piecewise. With the help of several auxiliary index tables, the compressed submeshes and Runs can serve as rendering primitives in a parallel rendering system. Based on PRMC, we design and implement a parallel rendering architecture. Experimental results showed that, compared with an uncompressed representation, PRMC meshes applied in a cluster parallel rendering system can dramatically reduce the communication requirement.
Abstract: Real-time rendering and lighting of forests is a difficult problem in visual simulation systems. Image-based rendering (IBR) is widely used in scene reconstruction because its rendering speed is independent of model complexity. Based on the light field rendering branch of IBR, this paper proposes an iterative projection algorithm for shape reconstruction, achieving forest effects with real-time light and shadow features. Experiments show that the algorithm combines the respective advantages of traditional iterative and projection algorithms and strikes a balance between quality and efficiency.
Abstract: Since the 1980s, various techniques have been used in the field of medicine for the post-processing of medical imaging data from computed tomography (CT) and magnetic resonance (MR). They include multiplanar reformation (MPR), maximum intensity projection (MIP), and volume rendering (VR). This paper presents the prototype of a new means of post-processing radiological examinations such as CT and MR, a technique that, for the first time, provides photorealistic visualizations of the human body. This new procedure was inspired by the quality of images achieved by animation software such as the programs used in the entertainment industry, particularly to produce animated films; thus the name: Cinematic Rendering. It is already foreseeable that this new method of depiction will quickly be incorporated into the set of instruments employed in so-called virtual anatomy (teaching anatomy through radiological depictions of the human body via X-ray, CT, and MR, in addition to computer animation programs designed especially for human anatomy). Its potential for medical applications will have to be evaluated by future scientific investigations.
Funding: Supported by the National Natural Science Foundation of China (61363075), the National High Technology Research and Development Program of China (863 Program) (2012AA12A308), and the Yue Qi Young Scholars Program of China University of Mining & Technology, Beijing (800015Z1117).
Abstract: To address the difficult issues of terrain data model simplification and crack elimination, this paper proposes an improved level-of-detail (LOD) terrain rendering algorithm, in which a variation coefficient of elevation is introduced to express the undulation of the topography. The coefficient is then used to construct a node evaluation function in the terrain data model simplification step. Furthermore, an edge reduction strategy is combined with improved restrictive quadtree segmentation to handle the crack problem. The experimental results demonstrate that, compared with a traditional LOD algorithm, the proposed method reduces the number of rendered triangles and increases rendering speed while preserving the rendering effect.
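The node evaluation based on a variation coefficient of elevation can be sketched as follows. The coefficient-of-variation formula (standard deviation over mean) is standard statistics, but the threshold value and the binary split decision are illustrative assumptions about how the paper's node evaluation function might be used.

```python
import numpy as np

def node_should_split(elev_patch, threshold=0.05):
    """Decide whether a quadtree node covering elev_patch should be
    subdivided: rugged terrain has a high coefficient of variation
    (std / |mean|) of elevation, flat terrain a low one."""
    mean = elev_patch.mean()
    if mean == 0:
        return bool(elev_patch.std() > 0)  # avoid division by zero
    cv = elev_patch.std() / abs(mean)
    return bool(cv > threshold)
```

A flat plateau keeps a coarse tile while a patch mixing valley floor and ridge elevations exceeds the threshold and is refined, which is exactly the adaptive simplification the node evaluation function is meant to drive.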
Abstract: Modern computer techniques have been in use for several years to generate three-dimensional visualizations of human anatomy. Very good 3-D computer models of the human body are now available and used routinely in anatomy instruction. These techniques are subsumed under the heading "virtual anatomy" to distinguish them from the conventional study of anatomy entailing cadavers and anatomy textbooks. Moreover, other imaging procedures (X-ray, angiography, CT, and MR) are also used in virtual anatomy instruction. A recently introduced three-dimensional post-processing technique named Cinematic Rendering now makes it possible to use the output of routine CT and MR examinations as the basis for highly photorealistic 3-D depictions of human anatomy. We have installed Cinematic Rendering (enabled for stereoscopy) in a high-definition 8K 3-D projection space that accommodates an audience of 150. The space's projection surface measures 16 × 9 meters; images can be projected on both the front wall and the floor. A game controller can be used to operate the Cinematic Rendering software so that it generates interactive real-time depictions of human anatomy from CT and MR data sets. This prototype installation was implemented without technical problems; in day-to-day, real-world use over a period of 22 months, there were no service impairments due to software crashes or other technical problems. We already employ this installation routinely for educational offerings open to the public, courses for students in the health professions, and (continuing) professional education units for medical interns, residents, and specialists: in, so to speak, the dissecting theater of the future.
Funding: supported by the Program for New Century Excellent Talents in University of the Ministry of Education of China (Grant No. NCET-13-0927), the International Science & Technology Cooperation Program of China (Grant No. 2012DFR50460), the National Natural Science Foundation of China (Grant Nos. 21101111 and 61274056), and the Shanxi Provincial Key Innovative Research Team in Science and Technology, China (Grant No. 2012041011).
Abstract: Levofloxacin (LOFX), well known as an antibiotic medicament, was shown to be useful as a 452-nm blue emitter for white organic light-emitting diodes (OLEDs). In this paper, the fabricated white OLED contains a 452-nm blue emitting layer (thickness of 30 nm) with 1 wt% LOFX doped in a CBP (4,4'-bis(carbazol-9-yl)biphenyl) host and a 584-nm orange emitting layer (thickness of 10 nm) with 0.8 wt% DCJTB (4-(dicyanomethylene)-2-tert-butyl-6-(1,1,7,7-tetramethyljulolidin-4-yl-vinyl)-4H-pyran) doped in CBP, which are separated by a 20-nm-thick buffer layer of TPBi (2,2',2"-(benzene-1,3,5-triyl)-tri(1-phenyl-1H-benzimidazole)). A high color rendering index (CRI) of 84.5 and CIE chromaticity coordinates of (0.33, 0.32), close to the ideal white emission CIE (0.333, 0.333), are obtained at a bias voltage of 14 V. Considering that LOFX is inexpensive and that its synthesis and purification technologies are mature, these results indicate that blue-fluorescent LOFX is useful for white OLED applications, although the maximum current efficiency and luminance are not high. The present paper is expected to become a milestone in the use of medical drug materials for OLEDs.
Abstract: The laser scanning confocal endomicroscope (LSCEM) has emerged as an imaging modality that provides noninvasive, in vivo imaging of biological tissue on a microscopic scale. Scientific visualization of LSCEM datasets captured by current imaging systems requires the datasets to be fully acquired and transferred to a separate rendering machine. To extend the features and capabilities of this modality, we propose a system capable of performing real-time visualization of LSCEM datasets. Using field-programmable gate arrays, our system performs three tasks in parallel: (1) automated control of dataset acquisition; (2) imaging-rendering system synchronization; and (3) real-time volume rendering of dynamic datasets. Through the fusion of the LSCEM imaging and volume rendering processes, acquired datasets can be visualized in real time to provide an immediate perception of the image quality and the biological condition of the subject, further assisting in real-time cancer diagnosis. Subsequently, the imaging procedure can be improved for more accurate diagnosis, reducing the need to repeat the process due to unsatisfactory datasets.