Funding: Supported in part by the National Natural Science Foundation of China (62176059, 62101136).
Abstract: Binaural rendering is of great interest to virtual reality and immersive media. Although humans can naturally use their two ears to perceive the spatial information contained in sounds, it is a challenging task for machines to achieve binaural rendering, since the description of a sound field often requires multiple channels and even the metadata of the sound sources. In addition, the perceived sound varies from person to person even in the same sound field. Previous methods generally rely on individual-dependent head-related transfer function (HRTF) datasets and optimization algorithms that act on HRTFs. In practical applications, there are two major drawbacks to existing methods. The first is a high personalization cost, as traditional methods achieve personalization by measuring HRTFs. The second is insufficient accuracy, because traditional methods optimize by discarding part of the information in order to retain the part that is perceptually more important. Therefore, it is desirable to develop novel techniques that achieve personalization and accuracy at a low cost. To this end, we focus on the binaural rendering of ambisonics and propose 1) a channel-shared encoder and channel-compared attention integrated into neural networks, and 2) a loss function quantifying interaural level differences to deal with spatial information. To verify the proposed method, we collect and release the first paired ambisonic-binaural dataset and introduce three metrics to evaluate the content-information and spatial-information accuracy of end-to-end methods. Extensive experimental results on the collected dataset demonstrate the superior performance of the proposed method and the shortcomings of previous methods.
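The interaural level difference (ILD) loss mentioned above is not spelled out in this abstract; the following is a minimal PyTorch sketch of one plausible formulation, assuming the ILD is taken as the per-frame log ratio of left- and right-channel energies (function names, framing parameters, and the weighting in the usage comment are illustrative, not the paper's).

```python
import torch

def ild_loss(pred, target, frame_len=1024, hop=512, eps=1e-8):
    """Penalize errors in the interaural level difference (ILD).

    pred, target: tensors of shape (batch, 2, samples) holding the
    left/right binaural channels. The ILD of each short-time frame is
    taken as the log ratio of left- to right-channel energy.
    This is an illustrative formulation, not the paper's exact loss.
    """
    def frame_ild(x):
        # Split into overlapping frames: (batch, 2, n_frames, frame_len)
        frames = x.unfold(2, frame_len, hop)
        energy = frames.pow(2).mean(dim=-1)          # (batch, 2, n_frames)
        left, right = energy[:, 0], energy[:, 1]
        return 10.0 * torch.log10((left + eps) / (right + eps))

    return torch.mean(torch.abs(frame_ild(pred) - frame_ild(target)))

# Usage (illustrative): combine with a waveform reconstruction term.
# loss = torch.nn.functional.l1_loss(pred, target) + 0.1 * ild_loss(pred, target)
```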
Abstract: With the development of virtual reality (VR) technology, more and more industries are beginning to integrate VR technology. To address the problem that the lighting effect of Caideng (colored festival lanterns) cannot be rendered directly in digital Caideng scenes, this article analyzes the lighting model and combines it with the lighting characteristics of Caideng scenes to design an optimized lighting algorithm that incorporates the bidirectional transmission distribution function (BTDF) model. This algorithm can efficiently render the lighting effect of Caideng models in a virtual environment. Image optimization methods are also used to enhance the immersive experience in VR. Finally, a Caideng roaming interactive system was designed based on this method. The results show that the frame rate of the system is stable during operation, maintained above 60 fps, with a good immersive experience.
Funding: The Science and Technology Program of the Educational Commission of Jiangxi Province, China (DA202104172); the Innovation and Entrepreneurship Course Program of Nanchang Hangkong University (KCPY1910); the Teaching Reform Research Program of Nanchang Hangkong University (JY21040).
Abstract: Background In recent years, with the rapid development of the mobile Internet and Web3D technologies, a large number of web-based online 3D visualization applications have emerged. Web3D online tourism, Web3D online architecture, Web3D online education environments, Web3D online medical care, and Web3D online shopping are examples of applications that leverage 3D rendering on the web. These applications have pushed the boundaries of traditional web applications, which use text, sound, image, video, and 2D animation as their main communication media, by adopting 3D virtual scenes as the main interaction object, enabling a user experience with a strong sense of immersion. This paper approaches the emerging Web3D applications, which have a growing impact on people's lives, through real-time rendering technology, the core technology of Web3D. It discusses the major 3D graphics APIs of Web3D and well-known Web3D engines in China and abroad, and classifies the real-time rendering frameworks of Web3D applications into different categories. Results Finally, this study analyzes the specific demands posed by different fields on Web3D applications by referring to representative Web3D applications in each particular field. Conclusions Our survey shows that Web3D applications based on real-time rendering have reached deep into many sectors of society and even the home, a trend that influences every industry.
Abstract: Point-based rendering is a common method widely used in point cloud rendering. It realizes rendering by turning the points into a base geometry. The critical step in point-based rendering is to set an appropriate rendering radius for the base geometry, usually calculated as the average Euclidean distance from the rendered point to its N nearest neighboring points. This method effectively reduces the appearance of empty spaces between points in rendering. However, it also causes the problem that the rendering radius of outlier points far away from the central region of the point cloud sequence can become large, which impacts the perceptual quality. To solve this problem, we propose an algorithm for point-based point cloud rendering with outlier detection to optimize the perceptual quality of rendering. The algorithm determines whether the detected points are outliers using a combination of local and global geometric features. For the detected outliers, the minimum radius is used for rendering. We examine the performance of the proposed method in terms of both objective quality and perceptual quality. The experimental results show that the peak signal-to-noise ratio (PSNR) of the point cloud sequences is improved under all geometric quantization levels, and the PSNR improvement ratio is more evident for dense point clouds. Specifically, the PSNR of the point cloud sequences is improved by 3.6% on average compared with the original algorithm. The proposed method significantly improves the perceptual quality of the rendered point clouds, and the results of ablation studies prove the feasibility and effectiveness of the proposed method.
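As a rough sketch of the radius-selection step described above, assuming a k-nearest-neighbor query with SciPy; the outlier rule shown here (radius far above the global median) is only an illustration of a combined local/global criterion, not the paper's exact test.

```python
import numpy as np
from scipy.spatial import cKDTree

def rendering_radii(points, k=8, outlier_factor=3.0, min_radius=1e-3):
    """Per-point rendering radius for point-based (splat) rendering.

    The radius of each point is the mean Euclidean distance to its k
    nearest neighbors; points whose radius lies far above the global
    median are treated as outliers and clamped to a minimum radius.
    Illustrative only; the paper combines local and global geometric
    features in its actual outlier test.
    """
    tree = cKDTree(points)
    # k+1 because the nearest neighbor of a point is itself (distance 0).
    dists, _ = tree.query(points, k=k + 1)
    radii = dists[:, 1:].mean(axis=1)

    median = np.median(radii)
    is_outlier = radii > outlier_factor * median
    radii[is_outlier] = min_radius
    return radii, is_outlier
```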
Funding: Foundation of China (No. 61902311) funding for this study; supported in part by the Natural Science Foundation of Shaanxi Province in China under Grants 2022JM-508, 2022JM-317, and 2019JM-162.
Abstract: Computer-aided diagnosis based on image color rendering promotes medical image analysis and doctor-patient communication by highlighting important information for medical diagnosis. To overcome the limitations of deep-learning-based color rendering methods, such as poor model stability, poor rendering quality, fuzzy boundaries, and crossed color boundaries, we propose a novel hinge-cross-entropy generative adversarial network (HCEGAN). A self-attention mechanism was added and improved to focus on the important information in the image, and the hinge-cross-entropy loss function was used to stabilize the training process of GAN models. In this study, we implement the HCEGAN model for image color rendering based on the DIV2K and COCO datasets and evaluate the results using SSIM and PSNR. The experimental results show that the proposed HCEGAN automatically re-renders images, significantly improves the quality of color rendering, and greatly improves stability relative to prior GAN models.
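The exact hinge-cross-entropy formulation is not given in the abstract; as a hedged sketch, the standard hinge adversarial losses that such a loss presumably builds on can be written as below, and the combination with a pixel-wise term in the trailing comment is an assumed illustration rather than the paper's loss.

```python
import torch
import torch.nn.functional as F

def d_hinge_loss(real_logits, fake_logits):
    # Standard hinge loss for the discriminator: push real logits above +1
    # and fake logits below -1.
    return (F.relu(1.0 - real_logits).mean() +
            F.relu(1.0 + fake_logits).mean())

def g_hinge_loss(fake_logits):
    # The generator tries to raise the discriminator's score on fakes.
    return -fake_logits.mean()

# Illustrative total generator objective for color rendering
# (the weighting and the pixel-wise term are assumptions, not the
# paper's exact hinge-cross-entropy loss):
# g_total = g_hinge_loss(fake_logits) + lambda_pix * F.l1_loss(fake, target)
```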
Funding: The Key Research Program of the Chinese Academy of Sciences (ZDRE-KT-2021-3).
Abstract: Volume visualization can illustrate not only the overall distribution but also the inner structure of data, and it is an important approach for space environment research. Space environment simulation can produce several correlated variables at the same time. However, existing compressed volume rendering methods only consider reducing the redundant information within a single volume of a specific variable, not the redundant information among these variables. For space environment volume data with multiple correlated variables, we propose a further improved HVQ method, based on the HVQ-1d method, that composites variable-specific levels to reduce the redundant information among these variables. The volume data associated with each variable is initially divided into disjoint blocks of size 4³. The blocks are represented as two levels, a mean level and a detail level. The variable-specific mean levels and detail levels are combined respectively to form a larger global mean level and a larger global detail level. To both global levels, a splitting based on principal component analysis is applied to compute initial codebooks. Then, the LBG algorithm is applied for codebook refinement and quantization. We further take advantage of GPU-based progressive rendering for real-time interactive visualization. Our method has been tested, along with HVQ and HVQ-1d, on high-energy proton flux volume data, including >5, >10, >30, and >50 MeV integrated proton flux. The results of our experiments show that the proposed method pays the smallest quality cost for compression, achieves higher decompression and rendering speed than HVQ, and provides satisfactory fidelity while ensuring interactive rendering speed.
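A minimal sketch of the two-level block decomposition described above (4³ blocks split into a mean level and a detail level), assuming NumPy and a volume whose sides are multiples of 4; the PCA-based codebook splitting and LBG refinement are omitted.

```python
import numpy as np

def split_mean_detail(volume, block=4):
    """Decompose a volume into per-block means and zero-mean detail blocks.

    volume: 3D NumPy array whose sides are multiples of `block`.
    Returns (means, details), where means holds one value per block and
    details holds the per-voxel residuals (block minus its mean).
    The PCA-based codebook splitting and LBG refinement used for vector
    quantization in the paper are not shown here.
    """
    d, h, w = volume.shape
    blocks = volume.reshape(d // block, block,
                            h // block, block,
                            w // block, block)
    blocks = blocks.transpose(0, 2, 4, 1, 3, 5)      # (bd, bh, bw, 4, 4, 4)
    means = blocks.mean(axis=(-3, -2, -1))
    details = blocks - means[..., None, None, None]
    return means, details

# For multi-variable data, the per-variable mean/detail levels would then be
# concatenated into global levels before quantization, as the paper describes.
```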
Funding: Supported by the National Key R&D Program of China (2018YFB2100601) and the National Natural Science Foundation of China (61872024).
Abstract: Background Mixed reality (MR) video fusion systems merge video imagery with 3D scenes to make the scene more realistic and to help users understand the video content and the temporal-spatial correlation between them, reducing the user's cognitive load. MR video fusion is used in various applications; however, video fusion systems require powerful client machines because video streaming delivery, stitching, and rendering are computationally intensive. Moreover, heavy bandwidth usage is another critical factor that affects the scalability of video-fusion systems. Methods Our framework provides a fusion method that dynamically projects video images onto 3D models as textures. Results Several experiments on different metrics demonstrate the effectiveness of the proposed framework. Conclusions The framework proposed in this study can overcome client limitations by utilizing remote rendering. Furthermore, the framework we built is browser-based, so the user can test the MR video fusion system with a laptop or tablet without installing any additional plug-ins or applications.
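Projecting video frames onto scene geometry as textures is essentially projective texture mapping; below is a hedged NumPy sketch of computing per-vertex texture coordinates from the video camera's view-projection matrix (the matrix convention and function names are assumptions, not the paper's implementation).

```python
import numpy as np

def project_video_uv(vertices, view_proj):
    """Compute texture coordinates that project a video frame onto a mesh.

    vertices:  (n, 3) world-space vertex positions of the 3D model.
    view_proj: (4, 4) view-projection matrix of the camera that captured
               the video (column-vector convention, clip space in [-1, 1]).
    Returns (n, 2) UV coordinates in [0, 1] and a mask of vertices that
    actually fall inside the video frustum.
    """
    n = vertices.shape[0]
    homo = np.hstack([vertices, np.ones((n, 1))])        # (n, 4)
    clip = homo @ view_proj.T                             # (n, 4)
    ndc = clip[:, :3] / clip[:, 3:4]                      # perspective divide
    uv = ndc[:, :2] * 0.5 + 0.5                           # [-1, 1] -> [0, 1]
    in_frustum = (np.abs(ndc) <= 1.0).all(axis=1) & (clip[:, 3] > 0)
    return uv, in_frustum
```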
Abstract: We demonstrated gold nanoclusters as color-tunable emissive light converters for white light-emitting diodes (WLEDs). A blue LED emitting at 460 nm excites gold nanoclusters mixed with a UV-curable material, generating broadband emission across the visible range. Increasing the amount of gold nanoclusters tunes the correlated color temperature of the WLEDs from cold white to warm white and also varies the color rendering index (CRI). The highest CRI obtained in the experiment is 92.
Funding: This research was supported by the Chung-Ang University Research Scholarship Grants in 2017.
Abstract: A painting is done according to the artist's style, and the most representative elements of that style are the texture and shape of the brush strokes. Computer simulation can reproduce the artist's painting by taking such strokes and pasting them onto an image; this is called stroke-based rendering. The quality of the result depends on the number and quality of the strokes used to create the image, but it is not easy to render with a large amount of stroke information because only a limited number of strokes can be scanned. In this work, we produce rendering results using a mass stroke dataset obtained by expanding existing strokes through warping. With this, we produce results of higher quality than conventional studies. Finally, we also examine the correlation between the amount of data and the results.
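A rough sketch of the stroke-expansion idea, assuming each scanned stroke is a grayscale image and using small random affine warps; the original work's exact warping method may differ.

```python
import numpy as np
from scipy.ndimage import affine_transform

def expand_strokes(stroke, n_variants=50, rng=None):
    """Generate warped variants of a scanned brush-stroke image.

    stroke: 2D grayscale NumPy array holding one scanned stroke.
    Each variant is produced by a small random affine warp (rotation,
    anisotropic scaling, shear) about the image center. This is only one
    possible choice of warping, not necessarily the paper's.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = stroke.shape
    center = np.array([h / 2.0, w / 2.0])
    variants = []
    for _ in range(n_variants):
        angle = rng.uniform(-0.4, 0.4)                 # radians
        scale = rng.uniform(0.7, 1.3, size=2)
        shear = rng.uniform(-0.2, 0.2)
        rot = np.array([[np.cos(angle), -np.sin(angle)],
                        [np.sin(angle),  np.cos(angle)]])
        mat = rot @ np.array([[scale[0], shear],
                              [0.0,      scale[1]]])
        # affine_transform maps output coords to input coords: in = mat @ out + offset
        offset = center - mat @ center
        variants.append(affine_transform(stroke, mat, offset=offset, order=1))
    return variants
```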
Funding: Project supported by the National Basic Research Program (973) of China (No. 2002CB312105), the National Natural Science Foundation of China (No. 60573074), the Natural Science Foundation of Shanxi Province, China (No. 20041040), the Shanxi Foundation of Tackling Key Problem in Science and Technology (No. 051129), and the Key NSFC Project of "Digital Olympic Museum" (No. 60533080), China.
Abstract: The use of compressed meshes in parallel rendering architectures is still an unexplored area, the main challenge of which is to partition and sort the encoded mesh in the compression domain. This paper presents a mesh compression scheme, PRMC (Parallel Rendering based Mesh Compression), that supplies encoded meshes which can be partitioned and sorted in a parallel rendering system even in the encoded domain. First, we segment the mesh into submeshes and clip the submeshes' boundaries into Runs, and then compress the submeshes and Runs piecewise. With the help of several auxiliary index tables, the compressed submeshes and Runs can serve as rendering primitives in a parallel rendering system. Based on PRMC, we design and implement a parallel rendering architecture. Experimental results show that, compared with the uncompressed representation, PRMC meshes applied in a cluster parallel rendering system can dramatically reduce the communication requirement.
Abstract: Since the 1980s, various techniques have been used in the field of medicine for the post-processing of medical imaging data from computed tomography (CT) and magnetic resonance (MR). They include multiplanar reformations (MPR), maximum intensity projection (MIP), and volume rendering (VR). This paper presents the prototype of a new means of post-processing radiological examinations such as CT and MR, a technique that, for the first time, provides photorealistic visualizations of the human body. This new procedure was inspired by the quality of images achieved by animation software, such as the programs used in the entertainment industry, particularly to produce animated films; hence the name Cinematic Rendering. It is already foreseeable that this new method of depiction will quickly be incorporated into the set of instruments employed in so-called virtual anatomy (teaching anatomy through radiological depictions of the human body via X-ray, CT, and MR, in addition to computer animation programs designed especially for human anatomy). Its potential for medical applications will have to be evaluated in future scientific investigations.
Abstract: The ray casting algorithm can obtain better-quality images in volume rendering; however, it suffers from problems such as heavy computation and slow rendering speed. Improving the resampling speed is key to accelerating the ray casting algorithm. An algorithm is introduced that reduces matrix computation by exploiting the matrix-transformation characteristics of resampling points between two coordinate systems. The projection of the 3D dataset onto the image plane is used to reduce the number of rays. A bounding box technique avoids sampling in empty voxels. By extending the Bresenham algorithm to three dimensions, each resampling point is calculated. Experimental results show a two- to three-fold improvement in rendering speed with the optimized algorithm, while achieving image quality similar to that of the traditional algorithm. The optimized algorithm can produce images of the required quality, thus reducing the total number of operations and speeding up volume rendering.
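As an illustration of the 3D extension of Bresenham's algorithm mentioned above (the paper's exact variant may differ), an integer-stepped line between two voxel coordinates can be generated as follows.

```python
def bresenham_3d(p0, p1):
    """Integer 3D line (Bresenham-style) between voxel coordinates p0 and p1.

    Steps along the dominant axis and accumulates error terms for the other
    two axes, so resampling positions along a ray can be generated with
    integer arithmetic only. Illustrative; the paper's variant may differ.
    """
    x, y, z = p0
    x1, y1, z1 = p1
    dx, dy, dz = abs(x1 - x), abs(y1 - y), abs(z1 - z)
    sx = 1 if x1 >= x else -1
    sy = 1 if y1 >= y else -1
    sz = 1 if z1 >= z else -1
    points = [(x, y, z)]
    if dx >= dy and dx >= dz:                 # x is the driving axis
        err_y, err_z = 2 * dy - dx, 2 * dz - dx
        for _ in range(dx):
            x += sx
            if err_y >= 0:
                y += sy
                err_y -= 2 * dx
            if err_z >= 0:
                z += sz
                err_z -= 2 * dx
            err_y += 2 * dy
            err_z += 2 * dz
            points.append((x, y, z))
    elif dy >= dx and dy >= dz:               # y is the driving axis
        err_x, err_z = 2 * dx - dy, 2 * dz - dy
        for _ in range(dy):
            y += sy
            if err_x >= 0:
                x += sx
                err_x -= 2 * dy
            if err_z >= 0:
                z += sz
                err_z -= 2 * dy
            err_x += 2 * dx
            err_z += 2 * dz
            points.append((x, y, z))
    else:                                     # z is the driving axis
        err_x, err_y = 2 * dx - dz, 2 * dy - dz
        for _ in range(dz):
            z += sz
            if err_x >= 0:
                x += sx
                err_x -= 2 * dz
            if err_y >= 0:
                y += sy
                err_y -= 2 * dz
            err_x += 2 * dx
            err_y += 2 * dy
            points.append((x, y, z))
    return points
```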
Funding: The Fundamental Research Funds for the Central Universities; the National Key R&D Program of China (2018YFB1403900); the High-quality and Cutting-edge Disciplines Construction Project for Universities in Beijing (Internet Information, Communication University of China).
Abstract: Background Realistic rendering has been an important goal of several interactive applications, which requires efficient virtual simulation of many special effects that are common in the real world. However, refraction is often ignored in these applications, because rendering the refraction effect is extremely complicated and time-consuming. Methods In this study, a simple, efficient, and fast technique for rendering water refraction effects is proposed. The technique comprises a broad phase and a narrow phase. In the broad phase, the water surface is considered flat, and the vertices of underwater meshes are transformed based on Snell's law. In the narrow phase, the effects of waves on the water surface are examined: every pixel on the water surface mesh is collected by a screen-space method with an extra rendering pass. The broad phase redirects most pixels that need to be recalculated in the narrow phase to pixels in the rendering buffer. Results We analyzed the performance of three conventional methods and ours in rendering refraction effects for the same scenes. The proposed method obtains a higher frame rate and better physical accuracy than the other methods. It has been used in several game scenes, and realistic water refraction effects can be generated efficiently. Conclusions The two-phase water refraction method offers a tradeoff between efficiency and quality. It is easy to implement in modern game engines and thus improves the quality of rendered scenes in video games and other real-time applications.
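The broad-phase transformation of underwater vertices relies on Snell's law; below is a minimal sketch of refracting a view ray at a flat water surface, with the air/water refractive indices and vector convention being standard assumptions rather than values taken from the paper.

```python
import numpy as np

def refract(incident, normal, n1=1.0, n2=1.33):
    """Refract a unit direction vector at a surface using Snell's law.

    incident: unit vector pointing toward the surface.
    normal:   unit surface normal pointing toward the incident medium.
    n1, n2:   refractive indices of air and water (typical values).
    Returns the refracted unit direction, or None on total internal
    reflection.
    """
    eta = n1 / n2
    cos_i = -np.dot(normal, incident)
    sin_t2 = eta * eta * (1.0 - cos_i * cos_i)
    if sin_t2 > 1.0:
        return None                      # total internal reflection
    cos_t = np.sqrt(1.0 - sin_t2)
    return eta * incident + (eta * cos_i - cos_t) * normal

# In the broad phase, each underwater vertex would be repositioned so that,
# seen from the camera, it lies where the refracted view ray through the
# flat water plane would intersect it.
```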
Funding: The National Natural Science Foundation of China (Grant Nos. 61204087, 61306099); the Guangdong Natural Science Foundation (Grant No. S2012040007003); the China Postdoctoral Science Foundation (2013M531841); the Fundamental Research Funds for the Central Universities (2014ZM0003, 2014ZM0034, 2014ZM0037, 2014ZZ0028); the Specialized Research Fund for the Doctoral Program of Higher Education (Grant No. 20120172120008).
Abstract: A very-high color rendering index white organic light-emitting diode (WOLED) based on a simple structure was successfully fabricated. The optimized device exhibits a maximum total efficiency of 13.1 lm/W, and 5.4 lm/W at 1,000 cd/m². A peak color rendering index of 90 and a relatively stable color over a wide range of luminance were obtained. In addition, it was demonstrated that the 4,4′,4″-tri(9-carbazolyl)triphenylamine host strongly influenced the performance of this WOLED. These results may be beneficial to the design of both materials and device architectures for high-performance WOLEDs.
Abstract: The laser scanning confocal endomicroscope (LSCEM) has emerged as an imaging modality that provides noninvasive, in vivo imaging of biological tissue on a microscopic scale. Scientific visualization of LSCEM datasets captured by current imaging systems requires these datasets to be fully acquired and brought to a separate rendering machine. To extend the features and capabilities of this modality, we propose a system capable of performing real-time visualization of LSCEM datasets. Using field-programmable gate arrays, our system performs three tasks in parallel: (1) automated control of dataset acquisition; (2) imaging-rendering system synchronization; and (3) real-time volume rendering of dynamic datasets. Through fusion of the LSCEM imaging and volume rendering processes, acquired datasets can be visualized in real time to provide an immediate perception of the image quality and the biological condition of the subject, further assisting real-time cancer diagnosis. Subsequently, the imaging procedure can be improved for more accurate diagnosis, and the need to repeat the process due to unsatisfactory datasets is reduced.
Funding: Supported by the National Natural Science Foundation of China (61363075); the National High Technology Research and Development Program of China (863 Program) (2012AA12A308); the Yue Qi Young Scholars Program of China University of Mining & Technology, Beijing (800015Z1117).
Abstract: To deal with the difficult issues of terrain data model simplification and crack elimination, this paper proposes an improved level-of-detail (LOD) terrain rendering algorithm in which a variation coefficient of elevation is introduced to express the undulation of the topography. The coefficient is then used to construct a node evaluation function in the terrain data model simplification step. Furthermore, an edge reduction strategy is combined with an improved restrictive quadtree segmentation to handle the crack problem. Experimental results demonstrate that, compared with a traditional LOD algorithm, the proposed method reduces the number of rendered triangles and increases the rendering speed while preserving the rendering quality.
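A hedged sketch of the elevation-based node evaluation: assuming the variation coefficient of elevation is the coefficient of variation (standard deviation over mean) of the heights inside a quadtree node, a node could be subdivided when this coefficient, weighted by the node's on-screen extent, exceeds a threshold. The paper's actual evaluation function is not reproduced here.

```python
import numpy as np

def elevation_variation_coefficient(heights):
    """Coefficient of variation of the elevations inside a quadtree node.

    heights: array of height samples covered by the node. A larger value
    means rougher terrain, so the node should be subdivided further.
    """
    mean = heights.mean()
    return heights.std() / (abs(mean) + 1e-6)

def should_split(heights, node_size, view_distance, threshold=0.05):
    # Illustrative node evaluation: rougher terrain and a larger on-screen
    # extent (node_size / view_distance) favor subdivision. The paper's
    # actual evaluation function may combine these terms differently.
    roughness = elevation_variation_coefficient(heights)
    return roughness * node_size / max(view_distance, 1e-6) > threshold
```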
Funding: Supported by the National Science Foundation of China (61773051, 61761166011, 51705016); the Beijing Natural Science Foundation (4172048); the Fundamental Research Funds for the Central Universities (2017JBZ003).
Abstract: The visual fidelity of bleeding simulation in a surgical simulator is critical, since it affects not only the degree of visual realism but also the user's medical judgment and treatment in real-life settings. The conventional marching cubes surface rendering algorithm provides an excellent visual effect when rendering gushing blood; however, it is insufficient for flowing blood, which is very common in surgical procedures, because in this case the rendered surface and depth textures of the blood are rough. In this paper, we propose a new method, called mixed depth rendering, for rendering blood flow in surgical simulation. A smooth height field is created to minimize the height difference between neighboring particles on the bleeding surface. The color and transparency of each bleeding area are determined by the number of bleeding particles, which is consistent with the real visual effect. In addition, there is little extra computational cost. Rendering of blood flow in a variety of surgical scenarios shows that the visual feedback is much improved. The proposed mixed depth rendering method is also used in a neurosurgery simulator that we developed.
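As a minimal sketch of determining each bleeding area's color and transparency from the number of bleeding particles, as described above; the transfer curve and constants are illustrative assumptions, not the paper's values.

```python
import numpy as np

def blood_color_alpha(particle_count, saturation_count=40):
    """Map the number of bleeding particles covering an area to RGBA.

    More particles mean a thicker blood layer, so the color darkens toward
    a deep red and the opacity increases toward 1. The saturation count and
    the exponential curve are illustrative choices only.
    """
    t = 1.0 - np.exp(-particle_count / float(saturation_count))  # 0..1 thickness
    light_red = np.array([0.85, 0.20, 0.20])
    dark_red = np.array([0.45, 0.02, 0.02])
    color = (1.0 - t) * light_red + t * dark_red
    alpha = 0.2 + 0.8 * t
    return np.append(color, alpha)
```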
Funding: Supported by the Program for New Century Excellent Talents in University of the Ministry of Education of China (Grant No. NCET-13-0927); the International Science & Technology Cooperation Program of China (Grant No. 2012DFR50460); the National Natural Science Foundation of China (Grant Nos. 21101111 and 61274056); the Shanxi Provincial Key Innovative Research Team in Science and Technology, China (Grant No. 2012041011).
Abstract: Levofloxacin (LOFX), which is well known as an antibiotic medicament, was shown to be useful as a 452-nm blue emitter for white organic light-emitting diodes (OLEDs). In this paper, the fabricated white OLED contains a 452-nm blue emitting layer (30 nm thick) with 1 wt% LOFX doped in a CBP (4,4'-bis(carbazol-9-yl)biphenyl) host and a 584-nm orange emitting layer (10 nm thick) with 0.8 wt% DCJTB (4-(dicyanomethylene)-2-tert-butyl-6-(1,1,7,7-tetramethyljulolidin-4-yl-vinyl)-4H-pyran) doped in CBP, which are separated by a 20-nm-thick buffer layer of TPBi (2,2′,2″-(benzene-1,3,5-triyl)-tri(1-phenyl-1H-benzimidazole)). A high color rendering index (CRI) of 84.5 and CIE chromaticity coordinates of (0.33, 0.32), close to the ideal white emission CIE (0.333, 0.333), are obtained at a bias voltage of 14 V. Considering that LOFX is inexpensive and that its synthesis and purification technologies are mature, these results indicate that the blue-fluorescent LOFX is useful for white OLED applications, although the maximum current efficiency and luminance are not high. The present paper is expected to become a milestone in the use of medical drug materials for OLEDs.
Funding: Supported by the Natural Science Foundation of Fujian Province (No. 2020J01652) and the Undergraduate Innovation and Entrepreneurship Training Program of Fujian Medical University (No. YC2003).
Abstract: AIM: To compare the damage caused by light-emitting diodes (LEDs) with different color rendering indexes (CRIs) to the ocular surface and retina of rats. METHODS: A total of 20 Sprague-Dawley (SD) rats were randomly divided into four groups: the first group was a normal control group without any intervention; the other three groups were exposed to LEDs with low (LED-L), medium (LED-M), and high (LED-H) CRI, respectively, for 12 h a day for 4 consecutive weeks. The changes in tear secretion (Schirmer I test, SIt), tear film break-up time (BUT), and corneal fluorescein sodium staining (CFS) scores were compared at different times (1 d before the experiment, and 2 and 4 wk after the experiment). The histopathological changes of the rat lacrimal gland and retina were observed at 4 wk, and the expression of tumor necrosis factor-α (TNF-α) and interleukin-6 (IL-6) in the lacrimal gland was detected by immunofluorescence. RESULTS: With increasing exposure time, the CFS value of each light-exposed group continued to increase, and the BUT and SIt scores continued to decrease, differing from the control group; the differences among the light-exposed groups were statistically significant. Hematoxylin-eosin (HE) staining showed that the lacrimal glands of each exposed group exhibited varying degrees of acinar atrophy, vacuole distribution, increased eosinophilic granules, etc.; the retina showed an obvious reduction of the photoreceptor cell layer and changes in retinal thickness; the LED-L group showed the most significant changes in all tests. Immunofluorescence suggested that the positive expression of TNF-α and IL-6 in the lacrimal glands of each exposed group was higher than that of the control group. CONCLUSION: LED exposure for 4 wk can cause pathological changes of the lacrimal gland and retina in rats and increase the expression of TNF-α and IL-6 in the lacrimal gland; the degree of damage is negatively correlated with the CRI.
Funding: The National Natural Science Foundation of China (No. 61473088); the Six Talent Peaks Project in Jiangsu Province.
Abstract: To improve the sense of reality in perception, an improved 3D shape haptic rendering algorithm is put forward based on a finger-mounted vibrotactile device. The principle is that interaction information and shape information are conveyed to users through vibrotactile feedback attached to a fingertip when they touch virtual objects on mobile terminals. The extraction of shape characteristics and interaction information, and the mapping of shape information to vibration stimulation, are the key parts of the proposed algorithm for realistic tactile rendering. The contact status of the interaction process, together with the height information and local gradient of the touch point, is regarded as shape information and used to control the vibration intensity, rhythm, and distribution of the vibrators. With different contact status and shape information, the vibration pattern can be adjusted in time to imitate the outlines of virtual objects. Finally, the effectiveness of the algorithm is verified by shape perception experiments. The results show that the improved algorithm is effective for 3D shape haptic rendering.
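A minimal sketch of the mapping idea, assuming the touch point's normalized height drives the vibration amplitude and the local gradient drives the pulsing rhythm; the mapping curves and constants are illustrative, not the paper's.

```python
import numpy as np

def vibration_command(in_contact, height, gradient,
                      max_amplitude=1.0, base_rate_hz=2.0, max_rate_hz=20.0):
    """Map contact state and local shape information to a vibration pattern.

    in_contact: whether the fingertip currently touches the virtual surface.
    height:     touch-point height normalized to [0, 1].
    gradient:   magnitude of the local surface gradient at the touch point.
    Returns (amplitude, pulse_rate_hz). All mappings here are illustrative.
    """
    if not in_contact:
        return 0.0, 0.0
    # Higher surface points vibrate more strongly; a steeper local slope
    # produces a faster pulsing rhythm to signal an edge or outline.
    amplitude = max_amplitude * np.clip(height, 0.0, 1.0)
    steepness = np.clip(gradient, 0.0, 1.0)
    pulse_rate = base_rate_hz + (max_rate_hz - base_rate_hz) * steepness
    return float(amplitude), float(pulse_rate)
```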