Funding: Financial support from the Brazilian Federal Agency for Support and Evaluation of Graduate Education (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior, CAPES, scholarship process no. BEX 0506/15-0); the Brazilian National Agency of Petroleum, Natural Gas and Biofuels (Agência Nacional do Petróleo, Gás Natural e Biocombustíveis, ANP), in cooperation with the Brazilian Financier of Studies and Projects (Financiadora de Estudos e Projetos, FINEP); and the Brazilian Ministry of Science, Technology and Innovation (Ministério da Ciência, Tecnologia e Inovação, MCTI), through the ANP Human Resources Program of São Paulo State University (Universidade Estadual Paulista, UNESP) for the Oil and Gas Sector, PRH-ANP/MCTI no. 48 (PRH48).
Abstract: The determination of stream velocity has become increasingly important across many sectors of industry, driven by the need for precise measurements to establish correct production rates, quantify the volumetric production of undesired fluids, build automated controls on top of these measurements to avoid over-flooding or over-production, support accurate predictive maintenance, and so on. A recurring difficulty is determining the velocity of one fluid embedded in another, for example the stream velocity of gas bubbles flowing through a liquid phase. Although several methods have been researched and are already in industrial use, a non-intrusive, automated way of providing these stream velocities remains valuable and can have a significant impact on project budgets. With this in mind, the script developed here breaks real-time video into individual frames and searches, via pixel correlations, for superposition matches from which the gas-bubble stream velocity is estimated. In essence, the script relies on image-processing functions and procedures already available in MATLAB, which make the methodology straightforward to implement. In the running test its accuracy was around 97% (ninety-seven percent); the raw source code, including comments, contained almost 3000 (three thousand) characters; and it was run on a workstation with an Intel Core Duo at 2.13 GHz and 2 GB of RAM. Despite the good results, only the end-point correlations actually contributed to the final solution, so the use of self-learning functions or a neural network could enhance the application's ability to run in real time without being exhausted by iterative loops.
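The abstract describes the frame-splitting and pixel-correlation idea but does not reproduce the MATLAB script itself. As a hedged illustration only, the Python sketch below estimates the dominant shift between two consecutive frames by phase correlation and converts it to a velocity; the file name, frame rate, and pixel calibration are assumptions, not values from the paper.

```python
# Sketch: estimate a stream velocity from two consecutive video frames via
# FFT-based phase correlation (an illustration of the pixel-correlation idea
# described in the abstract, not the original MATLAB script).
import cv2
import numpy as np

VIDEO_PATH = "bubbles.avi"      # hypothetical input file
FPS = 30.0                      # assumed frame rate [frames/s]
MM_PER_PIXEL = 0.1              # assumed spatial calibration [mm/pixel]

cap = cv2.VideoCapture(VIDEO_PATH)
ok1, frame1 = cap.read()
ok2, frame2 = cap.read()
cap.release()

if ok1 and ok2:
    # Convert to single-channel float images as required by phaseCorrelate.
    g1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY).astype(np.float32)
    g2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY).astype(np.float32)

    # The peak of the cross-power spectrum gives the dominant (dx, dy) shift
    # between the two frames, i.e. the best superposition match.
    (dx, dy), response = cv2.phaseCorrelate(g1, g2)

    displacement_mm = np.hypot(dx, dy) * MM_PER_PIXEL
    velocity_mm_s = displacement_mm * FPS   # shift per frame times frames per second
    print(f"Estimated stream velocity: {velocity_mm_s:.2f} mm/s (peak response {response:.2f})")
```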
Abstract: The alpha-stable self-similar stochastic process has proved to be an effective model for highly variable data traffic. This paper takes a close look at several specific issues and considerations in using the process to model aggregated VBR video traffic. Different methods for estimating the stability parameter α and the self-similarity parameter H are compared, and procedures for generating linear fractional stable noise (LFSN) and alpha-stable random variables are provided. Model construction and quantitative comparisons with fractional Brownian motion (FBM) and real traffic are also examined. Open problems and future directions are discussed.
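As a point of reference for the generation step mentioned in the abstract, the following is a minimal sketch of the Chambers-Mallows-Stuck transform for symmetric alpha-stable samples; the value alpha = 1.6 is only an example, and the paper's own generation procedure for LFSN may differ.

```python
# Sketch: draw symmetric alpha-stable random variables with the
# Chambers-Mallows-Stuck transform (one standard way to generate such
# innovations; the parameter value below is illustrative only).
import numpy as np

def sym_alpha_stable(alpha, size, rng=None):
    """Standard symmetric alpha-stable samples (beta = 0), 0 < alpha <= 2."""
    rng = np.random.default_rng() if rng is None else rng
    v = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    w = rng.exponential(1.0, size)                 # unit-mean exponential
    if np.isclose(alpha, 1.0):
        return np.tan(v)                           # alpha = 1 reduces to Cauchy
    return (np.sin(alpha * v) / np.cos(v) ** (1 / alpha)
            * (np.cos((1 - alpha) * v) / w) ** ((1 - alpha) / alpha))

samples = sym_alpha_stable(alpha=1.6, size=10_000)
# Heavy tails show up as occasional very large excursions:
print(samples.mean(), np.percentile(np.abs(samples), 99))
```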
Abstract: Object detection plays a vital role in video surveillance systems. To enhance security, surveillance cameras are now installed in public areas such as traffic signals, roadways, retail malls, train stations, and banks. However, monitoring the video continuously at a fast pace is a challenging job; as a consequence, security cameras on their own are of limited use and still require human monitoring. The primary difficulty in video surveillance is identifying abnormalities such as thefts, accidents, crimes, or other unlawful actions, and such anomalous actions occur far less frequently than ordinary events. To detect an object in a video, the images are first analyzed pixel by pixel. In digital image processing, segmentation is the process of partitioning an image into its individual parts at the pixel level. Segmentation performance is degraded by irregular and/or low illumination, and these factors strongly affect real-time object detection in a video surveillance system. In this paper, a modified ResNet model (M-ResNet) is proposed to enhance images affected by insufficient light. Experimental results compare the output of the existing method with the modified ResNet architecture and show a considerable improvement in detecting objects in the video stream. The proposed model gives better results on metrics such as precision, recall, and pixel accuracy, and achieves a reasonable improvement in object detection.
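The abstract reports precision, recall, and pixel accuracy without defining them; for reference, the short sketch below shows how these metrics are commonly computed from binary masks. The arrays and the 4x4 toy example are illustrative and are not the paper's evaluation protocol.

```python
# Sketch: common definitions of precision, recall and pixel accuracy for
# binary masks (illustrative only; the paper's exact protocol is not given).
import numpy as np

def mask_metrics(pred, truth):
    """pred, truth: boolean arrays of the same shape (object = True)."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    pixel_accuracy = (tp + tn) / pred.size
    return precision, recall, pixel_accuracy

# Toy example on a 4x4 mask:
pred  = np.array([[1,1,0,0],[1,0,0,0],[0,0,1,1],[0,0,1,0]], dtype=bool)
truth = np.array([[1,0,0,0],[1,0,0,0],[0,0,1,1],[0,0,1,1]], dtype=bool)
print(mask_metrics(pred, truth))
```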
Funding: Supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2020R1G1A1099559).
Abstract: Worldwide, industries and communities are currently concerned with building, expanding, and exploring the assets and resources found in the oceans and seas. More precisely, for stock assessment, archaeology, and surveillance, several cameras are installed under the sea to collect videos. These large videos, however, require a great deal of time and memory to process in order to extract relevant information. Hence, to automate this manual procedure of video assessment, an accurate and efficient automated system is a pressing necessity. From this perspective, we present a complete framework for video summarization and object detection in underwater videos. We employ a perceived motion energy (PME) method to first extract the keyframes, followed by an object detection model, namely YOLOv3, to perform object detection in underwater videos. The issues of blurriness and low contrast in underwater images are also addressed by applying an image enhancement method. Furthermore, the proposed framework for underwater video summarization and object detection has been evaluated on the publicly available Brackish dataset. The framework shows good performance and can therefore assist marine researchers and scientists working in underwater archaeology, stock assessment, and surveillance.
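The exact PME formulation is not given in the abstract; as a rough illustration of keyframe selection by motion energy, the sketch below scores consecutive-frame differences and keeps frames above a threshold. The clip name and threshold are assumptions, and the detection stage (YOLOv3) and image enhancement are omitted.

```python
# Sketch: keyframe selection by a simple motion-energy measure (an
# illustration of the idea behind PME-style keyframe extraction; the exact
# PME formulation and the detection model are not shown here).
import cv2
import numpy as np

VIDEO_PATH = "brackish_clip.mp4"   # hypothetical clip
ENERGY_THRESHOLD = 12.0            # assumed threshold (tune per dataset)

cap = cv2.VideoCapture(VIDEO_PATH)
keyframes, prev, idx = [], None, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if prev is not None:
        # Mean absolute frame difference as a crude motion-energy proxy.
        energy = np.mean(np.abs(gray - prev))
        if energy > ENERGY_THRESHOLD:
            keyframes.append(idx)   # this frame would go to the detector
    prev = gray
    idx += 1
cap.release()
print(f"{len(keyframes)} candidate keyframes, first few: {keyframes[:10]}")
```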
Funding: Supported in part by NSFC (62002320, U19B2043, 61672456) and the Key R&D Program of Zhejiang Province, China (2021C01119).
Abstract: Video steganography plays an important role in secret communication: it conceals a secret video in a cover video by perturbing the values of pixels in the cover frames, and imperceptibility is the first and foremost requirement of any steganographic approach. Inspired by the fact that human eyes perceive pixel perturbations differently in different video areas, a novel, effective, and efficient Deeply-Recursive Attention Network (DRANet) for video steganography is proposed, which finds suitable areas for information hiding by modelling spatio-temporal attention. The DRANet contains two main components, a Non-Local Self-Attention (NLSA) block and a Non-Local Co-Attention (NLCA) block. Specifically, the NLSA block selects the cover-frame areas that are suitable for hiding by computing inter- and intra-frame correlations of the cover frames. The NLCA block produces enhanced representations of the secret frames to improve the robustness of the model and alleviate the influence of different areas in the secret video. Furthermore, DRANet reduces the number of model parameters by applying the same operations recursively to the different frames of an input video. Experimental results show that the proposed DRANet achieves better performance with fewer parameters than state-of-the-art competitors.
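The abstract does not specify the DRANet layer configuration; purely as a point of reference, the sketch below implements a generic non-local self-attention block over frame features in PyTorch, in the spirit of the NLSA component. All channel sizes and the residual wiring are illustrative assumptions, not the paper's design.

```python
# Sketch: a generic non-local self-attention block over frame features
# (illustrative only; not the actual DRANet NLSA implementation).
import torch
import torch.nn as nn

class NonLocalSelfAttention(nn.Module):
    def __init__(self, channels, reduction=2):
        super().__init__()
        inner = channels // reduction
        self.theta = nn.Conv2d(channels, inner, kernel_size=1)  # queries
        self.phi = nn.Conv2d(channels, inner, kernel_size=1)    # keys
        self.g = nn.Conv2d(channels, inner, kernel_size=1)      # values
        self.out = nn.Conv2d(inner, channels, kernel_size=1)

    def forward(self, x):                                # x: (B, C, H, W)
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)     # (B, HW, C')
        k = self.phi(x).flatten(2)                       # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)         # (B, HW, C')
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)  # (B, HW, HW)
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                           # residual connection

x = torch.randn(1, 64, 16, 16)
print(NonLocalSelfAttention(64)(x).shape)                # torch.Size([1, 64, 16, 16])
```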
Abstract: An autofocus system for a USB video camera is introduced. Video images captured by the USB video camera are processed on a computer with an FFT or a differentiation operation to obtain spectral-magnitude or derivative-magnitude data; based on these data, the computer determines whether the lens of the USB video camera is out of focus and drives a motor to move the lens to the in-focus position. Measures for improving autofocus accuracy are also discussed. Experimental results show that the system achieves reliable autofocus for the USB video camera and makes cameras with a USB interface simpler and more convenient to use.
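As an illustration of the derivative-based focus measure described above, the sketch below scores frames by their gradient energy and hill-climbs the lens position toward the sharpest image; `capture_frame` and `move_lens` are hypothetical stand-ins for the camera and motor interface, which is hardware specific.

```python
# Sketch: derivative-based focus measure plus a simple hill-climbing lens
# search (illustrative only; the camera/motor calls are hypothetical stubs).
import numpy as np

def focus_measure(gray):
    """Sum of squared finite differences: large when the image is sharp."""
    g = gray.astype(np.float64)
    dx = np.diff(g, axis=1)
    dy = np.diff(g, axis=0)
    return (dx ** 2).sum() + (dy ** 2).sum()

def autofocus(capture_frame, move_lens, steps=20, step_size=4):
    """Crude hill-climb over lens positions (illustrative only)."""
    best_score, direction = focus_measure(capture_frame()), +1
    for _ in range(steps):
        move_lens(direction * step_size)
        score = focus_measure(capture_frame())
        if score < best_score:           # image got blurrier: reverse, shrink step
            direction, step_size = -direction, max(1, step_size // 2)
        best_score = max(best_score, score)
    return best_score

# Example with synthetic stand-ins (no real hardware):
autofocus(capture_frame=lambda: np.random.rand(480, 640), move_lens=lambda step: None)
```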
Abstract: To address the loss of fine structural detail and the overlap between events that are common in current methods, a dense video captioning method based on a bidirectional feature pyramid (dense video captioning with bilateral feature pyramid net, BFPVC) is proposed. BFPVC strengthens the multi-scale video feature maps with a bidirectional feature pyramid comprising bottom-up, top-down, and lateral-connection branches, jointly representing temporal, spatial, and semantic information; the decoder then captures a more complete set of event proposals from the enhanced video features and generates richer, more detailed textual descriptions for the corresponding video events. Experimental results on the ActivityNet Captions and YouCook2 datasets show that BFPVC generates more detailed and richer descriptions than comparable models, confirming the effectiveness of the bidirectional feature pyramid for dense video captioning.
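BFPVC's exact pyramid is not specified beyond the three branches named above; as a hedged reference, the sketch below shows a generic bidirectional fusion (top-down plus bottom-up, with lateral 1x1 convolutions) of multi-scale temporal feature maps in PyTorch. Channel sizes, number of levels, and the pooling/upsampling choices are arbitrary examples, not the paper's configuration.

```python
# Sketch: generic bidirectional fusion of multi-scale temporal feature maps
# (illustrative of the feature-pyramid idea only; not the BFPVC architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BidirectionalPyramid1D(nn.Module):
    def __init__(self, channels=256, levels=3):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv1d(channels, channels, 1) for _ in range(levels))
        self.smooth = nn.ModuleList(nn.Conv1d(channels, channels, 3, padding=1) for _ in range(levels))

    def forward(self, feats):                     # feats: fine -> coarse, each (B, C, T_i)
        lat = [l(f) for l, f in zip(self.lateral, feats)]
        # Top-down pass: upsample coarse features and add them to finer ones.
        for i in range(len(lat) - 2, -1, -1):
            up = F.interpolate(lat[i + 1], size=lat[i].shape[-1], mode="linear", align_corners=False)
            lat[i] = lat[i] + up
        # Bottom-up pass: downsample fine features and add them to coarser ones.
        for i in range(1, len(lat)):
            down = F.adaptive_max_pool1d(lat[i - 1], lat[i].shape[-1])
            lat[i] = lat[i] + down
        return [s(f) for s, f in zip(self.smooth, lat)]

feats = [torch.randn(1, 256, t) for t in (64, 32, 16)]
print([f.shape[-1] for f in BidirectionalPyramid1D()(feats)])   # [64, 32, 16]
```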