Funding: Partially supported by National Science Foundation grants EAR-0838562 and EAR-1830976.
Abstract: Video cameras are common at volcano observatories, but their utility is often limited during periods of crisis by the large data volume from continuous acquisition and the time required for manual analysis. For cameras to serve as effective monitoring tools, video frames must be synthesized into relevant time-series signals and further analyzed to classify and characterize observable activity. In this study, we use computer vision and machine learning algorithms to identify periods of volcanic activity and quantify plume rise velocities from video observations. Data were collected at Villarrica Volcano, Chile, from two visible-band cameras located ~17 km from the vent that recorded at 0.1 and 30 frames per second between February and April 2015. Over these two months, Villarrica exhibited a diverse range of eruptive activity, including a paroxysmal eruption on 3 March. Before and after the eruption, activity included nighttime incandescence, dark and light emissions, inactivity, and periods of cloud cover. We quantify the color and spatial extent of plume emissions using a blob detection algorithm, whose outputs are fed into a trained artificial neural network that categorizes the observable activity into five classes. Activity shifts from primarily nighttime incandescence to ash emissions following the 3 March paroxysm, which likely relates to the reemergence of the buried lava lake. Time periods exhibiting plume emissions are further analyzed using a row and column projection algorithm that identifies plume onsets and calculates apparent horizontal and vertical plume rise velocities. Plume onsets are episodic, occurring with an average period of ~50 s, suggesting a puffing style of degassing, which is commonly observed at Villarrica. However, the lack of clear acoustic transients in the accompanying infrasound record suggests puffing may be controlled by atmospheric effects rather than a degassing regime at the vent. The methods presented here offer a generalized toolset for volcano observatories to classify and track emission statistics at a variety of volcanoes, to better monitor periods of unrest, and ultimately to forecast major eruptions.
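The classification pipeline described here lends itself to a compact illustration. The Python sketch below extracts per-frame blob statistics (spatial extent, centroid, mean color) with OpenCV and feeds them to a small neural network. It is a minimal stand-in, not the paper's implementation: the Otsu thresholding, single-dominant-blob assumption, feature choices, and placeholder training data are all illustrative assumptions; only the five class names follow the abstract's description.

```python
# Hypothetical sketch of a blob-statistics -> neural-network classifier.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

# Five activity classes, following the abstract's description.
CLASSES = ["nighttime incandescence", "dark emission", "light emission",
           "no activity", "cloud cover"]

def frame_features(frame_bgr):
    """Return a feature vector: blob area, centroid, and mean color."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Segment the frame and keep the largest connected blob (assumption:
    # one dominant plume region per frame).
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.zeros(6)
    blob = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(blob)
    mean_bgr = cv2.mean(frame_bgr, mask=mask)[:3]  # mean color of masked area
    return np.array([cv2.contourArea(blob), x + w / 2, y + h / 2, *mean_bgr])

if __name__ == "__main__":
    # Placeholder training data; real use would fit on labelled frames.
    rng = np.random.default_rng(0)
    X_train = rng.random((100, 6))
    y_train = rng.integers(0, 5, 100)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)
    clf.fit(X_train, y_train)

    frame = np.zeros((480, 640, 3), np.uint8)
    cv2.circle(frame, (320, 200), 60, (200, 200, 255), -1)  # synthetic plume
    print(CLASSES[clf.predict([frame_features(frame)])[0]])
```

Per-frame class labels produced this way can then be aggregated into the kind of activity time series the abstract describes.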
Funding: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2021R1I1A3058103).
Abstract: This paper proposes a methodology for using multi-modal gameplay data to detect outlier behavior. The proposed methodology collects, synchronizes, and quantifies time-series data from webcams, mice, and keyboards. Facial expressions are quantified along a one-dimensional pleasure axis, and changes of expression in the mouth and eye regions are detected separately. Furthermore, keyboard and mouse input frequencies are tracked to determine users' interaction intensity. We then apply a dynamic time warping algorithm to detect outlier behavior. The detected outlier behavior patterns were play patterns that the game designer did not intend, or play patterns that differed greatly from those of other users. These outlier patterns can give game designers feedback on users' actual play experiences. Our results can be applied in the game industry as a form of game user-experience analysis, enabling a quantitative evaluation of how exciting a game is.
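Since the abstract names dynamic time warping (DTW) as the comparison step, a minimal sketch may help. The code below implements the classic DTW recurrence in NumPy and flags sessions whose average distance to all other sessions is unusually high. The mean-plus-two-standard-deviations cutoff and the synthetic input traces are illustrative assumptions, not the paper's procedure.

```python
# Minimal DTW-based outlier flagging over interaction-intensity traces.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def flag_outliers(series):
    """Return indices of series far (in mean DTW distance) from the rest."""
    k = len(series)
    mean_dist = np.array([
        np.mean([dtw_distance(series[i], series[j])
                 for j in range(k) if j != i])
        for i in range(k)
    ])
    cutoff = mean_dist.mean() + 2 * mean_dist.std()  # assumed threshold rule
    return [i for i in range(k) if mean_dist[i] > cutoff]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Nine "typical" input-frequency traces plus one divergent one.
    sessions = [np.sin(np.linspace(0, 6, 80)) + 0.1 * rng.standard_normal(80)
                for _ in range(9)]
    sessions.append(rng.standard_normal(80) * 3)  # outlier play pattern
    print("outlier sessions:", flag_outliers(sessions))
```

DTW is a natural fit here because two players can produce the same play pattern at different speeds; the warping aligns the traces in time before measuring their difference.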
Funding: This work was supported by the Foundation of Technology Supporting the Creation of Digital Media Contents project (CREST, JST), Japan.
Abstract: In recent years, many image-based rendering techniques have advanced from static to dynamic scenes, becoming video-based rendering (VBR) methods. In practice, however, only a few of them can render new views online. We present a new VBR system that creates new views of a live dynamic scene. The system provides high-quality images and does not require any background subtraction. Our method follows a plane-sweep approach and achieves real-time rendering on consumer graphics hardware, the graphics processing unit (GPU). Only one computer is used for both acquisition and rendering. Video streams are acquired from at least 3 webcams. We propose an additional video-stream management scheme that extends the number of webcams to 10 or more. These considerations make our system low-cost and hence accessible to everyone. We also present an adaptation of our plane-sweep method that creates multiple views of the scene simultaneously in real time. Our system is especially designed for stereovision using autostereoscopic displays. The new views are computed from 4 webcams connected to a computer and are compressed for transfer to a mobile phone. Using GPU programming, our method provides up to 16 images of the scene in real time. The use of both the GPU and the CPU allows the method to run on a single consumer-grade computer.
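The plane-sweep approach is easiest to see in a toy form. The NumPy sketch below sweeps a set of fronto-parallel depth planes, warps each input view toward the virtual camera by the disparity each plane would induce, and keeps, per pixel, the color of the most photo-consistent plane. The paper performs this per fragment on the GPU with calibrated cameras; the 1-D pixel shifts, sign convention, and focal length here are simplifying assumptions for a runnable CPU demo.

```python
# Toy CPU plane sweep: horizontally offset cameras, fronto-parallel planes.
import numpy as np

def plane_sweep(views, offsets, depths, focal=500.0):
    """views: list of HxWx3 float images from cameras displaced by
    `offsets` metres along x; returns a synthesized centre view."""
    h, w, _ = views[0].shape
    best_err = np.full((h, w), np.inf)
    out = np.zeros((h, w, 3))
    for z in depths:
        # Shift each view back toward the virtual camera by the disparity
        # this depth plane would induce (toy 1-D sign convention).
        warped = [np.roll(img, int(round(focal * dx / z)), axis=1)
                  for img, dx in zip(views, offsets)]
        stack = np.stack(warped)                      # (n_views, h, w, 3)
        mean = stack.mean(axis=0)
        err = ((stack - mean) ** 2).sum(axis=(0, 3))  # photo-consistency score
        better = err < best_err
        best_err[better] = err[better]
        out[better] = mean[better]  # keep the most consistent plane's color
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    base = rng.random((120, 160, 3))
    offsets = (-0.1, 0.0, 0.1)
    # Synthetic views of a scene lying entirely on one plane at z = 2 m.
    views = [np.roll(base, -int(round(500.0 * dx / 2.0)), axis=1)
             for dx in offsets]
    novel = plane_sweep(views, offsets, np.linspace(1.0, 4.0, 16))
    print("mean reconstruction error:", float(np.abs(novel - base).mean()))
```

Because each depth plane is scored independently per pixel, the loop over planes maps directly onto the per-fragment GPU evaluation the abstract describes.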