Abstract: This paper describes a level-of-detail rendering technique for large-scale irregular volume datasets. It is well known that the memory bandwidth consumed by visibility sorting becomes the limiting factor when carrying out volume rendering of such datasets. To develop a sorting-free volume rendering technique, we previously proposed a particle-based technique that generates opaque, emissive particles from a density function that is constant within each irregular volume cell and projects the particles onto an image plane with sub-pixels. When the density function changes significantly within a cell, the cell boundary may become prominent, which can cause blocky noise; moreover, as the number of sub-pixels increases, the required frame buffer grows large. To solve these problems, this work proposes a new particle-based volume rendering technique that generates particles using Metropolis sampling and renders them using an ensemble average. To confirm the effectiveness of this method, we applied the proposed technique to several irregular volume datasets, with the result that the ensemble average outperforms the sub-pixel average in computational complexity and memory usage. In addition, the ensemble-average technique allowed us to implement level of detail in the interactive rendering of a 71-million-cell hexahedral volume dataset and a 26-million-cell quadratic tetrahedral volume dataset.
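The two ingredients of the proposed method can be sketched in a few lines. The following is a minimal 1-D illustration, not the authors' implementation: the density function `rho`, the random-walk proposal, and the flat-list image representation are all illustrative assumptions.

```python
import math
import random

def metropolis_particles(rho, x0, n_particles, step=1.0):
    """Generate particle positions distributed according to the density
    function rho via a simple Metropolis random walk (1-D sketch)."""
    samples = []
    x = x0
    fx = rho(x)
    while len(samples) < n_particles:
        # Propose a symmetric random-walk move.
        x_new = x + random.uniform(-step, step)
        f_new = rho(x_new)
        # Accept with probability min(1, rho(x_new) / rho(x));
        # on rejection the current position is kept (and re-recorded).
        if f_new >= fx or random.random() < f_new / fx:
            x, fx = x_new, f_new
        samples.append(x)
    return samples

def ensemble_average(images):
    """Average each pixel over an ensemble of rendered images,
    instead of averaging sub-pixels within a single large frame buffer."""
    n = len(images)
    return [sum(pix) / n for pix in zip(*images)]
```

Because the ensemble average accumulates over repeated renderings of the same image resolution, the frame-buffer size stays constant as quality increases, which is the memory advantage the abstract claims over the sub-pixel average.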
Abstract: It has become critical to develop explanatory models using partial differential equations (PDEs) for big data from scientific phenomena of interest, e.g., fluid, atmospheric, and cosmic phenomena. Big data are often measured at discrete spatiotemporal points. In this paper, we assume a PDE and treat a set of its solution data at multiple discrete points as pseudo-measurement data. In a data-driven PDE derivation, the PDE is assumed to be a linear regression model comprising partial temporal and spatial differential terms, where the coefficients are estimated using regression analysis techniques, and the derivation accuracy is defined as the difference (i.e., error) between the exact and estimated coefficients. A spatiotemporal model is required to calculate the partial temporal and spatial differential terms; thus, we employ a spatiotemporal model based on a neural network (NN) obtained from the big data. To develop the data-driven PDE derivation technique, we employ PDEs with exact solutions so that the differential terms can be calculated by differentiating them analytically at multiple discrete points. Given an activation function in the NN model, the partial differential terms can be derived using the chain rule. The NN model accuracy is measured by the loss function and by the error between the exact and estimated partial differential terms. In addition, we clarify the requirements for an NN structure that maximizes the PDE derivation and NN model accuracy by varying the NN meta-parameters, i.e., the numbers of NN layers and neurons.
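The regression step of such a derivation can be illustrated with a minimal sketch. Assuming the advection equation u_t = -c·u_x with exact solution u = sin(x - ct) (an illustrative choice, not necessarily one of the paper's test PDEs), the coefficient is recovered by least squares over analytically differentiated pseudo-measurements; in the full method the differential terms would instead come from the trained NN model.

```python
import math

def estimate_pde_coefficient(u_t, u_x):
    """Least-squares estimate of a in the regression model u_t = a * u_x."""
    return sum(t * x for t, x in zip(u_t, u_x)) / sum(x * x for x in u_x)

# Pseudo-measurement data: the exact solution u = sin(x - c*t) of the
# advection equation u_t = -c * u_x, differentiated analytically at a
# grid of discrete spatiotemporal points.
c = 2.0
points = [(0.1 * i, 0.05 * j) for i in range(20) for j in range(20)]
u_t = [-c * math.cos(x - c * t) for x, t in points]
u_x = [math.cos(x - c * t) for x, t in points]

a_hat = estimate_pde_coefficient(u_t, u_x)
# Derivation accuracy: error between exact and estimated coefficients.
error = abs(a_hat - (-c))
```

With analytic derivatives the error is essentially zero; when the derivatives are taken from an NN surrogate instead, this error becomes the accuracy measure the abstract describes.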
Abstract: (1) This article, published on 20 June 2013, had the following title: A VASUALIZATION FOR THE DYNAMIC BEHAVIORS OF THE MIXTURE OF WATER MASS FOR NORTHWESTERN PACIFIC NEAR JAPAN. Here, "VASUALIZATION" should be "VISUALIZATION".
Funding: Supported by the "Hakodate Marine Bio Cluster Project" in the Knowledge Cluster Program from 2009; a Grant-in-Aid for University and Society Collaboration from the Ministry of Education, Culture, Sports, Science and Technology (MEXT); a Grant-in-Aid for the Research Program on Climate Change Adaptation (RECCA); and a joint Japan-France JST-ANR Grant-in-Aid for the PetaFlow project.
Abstract: Recent studies have focused on the distribution of water mass because regions where water masses mix are highly related to rich fishing grounds [Yasuda I., Watanabe Y., Fish. Oceanogr. 3(3): 172–181, 1994]. Owing to the large data size and time-varying nature of ocean data, efficient exploration and visualization remain extremely challenging. To extract the dynamic behaviors of water masses and their mixture from a large-scale simulated ocean dataset, we developed an efficient visualization system that applies our volume compression and volume rendering methods. This system allows us to investigate the time-varying distributions of ocean physical properties, tailored to the user's perspective and requirements. In the experiments, we demonstrate the generality and expressiveness of the system by applying it to single- and multi-property visualizations to find significant ocean water masses. Consequently, we obtained clear visualization results showing the dynamic behaviors of the mixture of water masses for simulation data covering a region of the northwestern Pacific near Japan.
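A multi-property selection of the kind used to isolate a water mass can be sketched as a simple temperature-salinity mask over the simulation grid. The thresholds and flat-array data layout below are illustrative assumptions only; the paper's actual selection criteria, compression method, and rendering method are not reproduced here.

```python
def water_mass_mask(temperature, salinity, t_range, s_range):
    """Flag grid points whose temperature and salinity both fall inside
    the given ranges -- a minimal multi-property selection that could
    feed a volume renderer's opacity transfer function."""
    t_lo, t_hi = t_range
    s_lo, s_hi = s_range
    return [
        t_lo <= t <= t_hi and s_lo <= s <= s_hi
        for t, s in zip(temperature, salinity)
    ]
```

Combining two (or more) physical properties in one mask is what distinguishes a multi-property visualization from rendering a single scalar field.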
Funding: Supported by JST CREST, Japan (Grant Number JPMJCR1511), and the Keihanshin Consortium for Fostering the Next Generation of Global Leaders in Research (K-CONNEX), established by the Human Resource Development Program for Science and Technology, MEXT.
Abstract: To improve visualization, it is necessary to optimize the design by analyzing user behavior as well as by improving the evaluation indices of computational experiments and the task performance (e.g., correct answer rate and completion time) in user experiments. Although various studies have investigated the influence of user behavior on the evaluation of visualization, the majority have focused on simple visualization tasks. Here, a simple task does not mean a simple visualization comprising only a few visual elements, but rather a task in which the information obtained from the visualization is the only clue for completing it. Few studies, however, have targeted complicated tasks, in which multiple pieces of information obtained from the visualization serve as clues for completing the task, regardless of the number of elements the visualization contains. Therefore, in this study, we investigated the behavior of participants performing complicated tasks. As the target of the user experiment, we selected two types of group-in-a-box (GIB) layouts, which can be considered a complicated visualization method. In the experiment, participants were asked to perform an exploration task specific to GIB layouts: which group has the maximum number of intra-edges? We also collected eye-tracking data in addition to task performance. The results showed that the correct answer rate is considerably affected by one visualization factor: whether the correct answer, i.e., the box with the maximum number of intra-edges, is also the box with the largest area. Furthermore, an analysis of the collected eye-tracking data revealed that this visualization factor affected the participants' exploration behavior; however, it did not affect the locations on which participants focused. These results indicate that visualization elements not considered by the visualization designer can influence the task of extracting information from the data. Therefore, designers must configure visualizations by considering users' visual cognitive behavior.
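The exploration task itself, finding the group with the most intra-edges, can be stated precisely with a short sketch. The edge-list and group-mapping representation below is an assumption for illustration, not the study's actual data format.

```python
def intra_edge_counts(edges, group_of):
    """Count intra-group edges: edges whose two endpoints belong to the
    same group, as in the GIB exploration task."""
    counts = {}
    for u, v in edges:
        gu, gv = group_of[u], group_of[v]
        if gu == gv:  # inter-group edges are ignored
            counts[gu] = counts.get(gu, 0) + 1
    return counts

def group_with_max_intra_edges(edges, group_of):
    """The ground-truth answer participants had to read off the layout."""
    counts = intra_edge_counts(edges, group_of)
    return max(counts, key=counts.get)
```

Note that the correct answer depends only on edge counts, whereas the box area in a GIB layout is typically driven by group size (number of nodes), which is exactly why the two can disagree and bias participants.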