The knowledge of wing orientation and deformation during flapping flight is necessary for a complete aerodynamic analysis, but to date those kinematic features have not been simultaneously quantified for free-flying insects. A projected comb-fringe (PCF) method has been developed for measuring spanwise camber changes on free-flying and beating-flying dragonflies through the course of a wingbeat; it is based on projecting a fringe pattern over the whole measurement area and then measuring the wing deformation from the distorted fringe pattern. Experimental results demonstrate substantial camber changes both along the wingspan and through the course of a wingbeat. For a free-flying dragonfly, the ratio of camber deformation to chord length for the hind wing reaches 0.11 at the 75% span position, at a flapping angle of -0.66 degrees.
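To make the reported camber ratio concrete, the following is a minimal sketch (not the authors' code) of how camber could be computed once the distorted fringe pattern has been converted into a chordwise height profile z(x) at a given span station. The profile data and helper name `camber_ratio` are illustrative assumptions.

```python
# Sketch: camber ratio = max perpendicular deviation of the measured surface
# from the chord line, normalized by chord length. Assumes the fringe shift
# has already been triangulated into heights z at chordwise positions x.
import numpy as np

def camber_ratio(x, z):
    """x, z: chordwise coordinates and heights from leading to trailing edge."""
    chord_vec = np.array([x[-1] - x[0], z[-1] - z[0]])
    chord_len = np.hypot(*chord_vec)
    # Project each surface point onto the chord line, then measure the
    # perpendicular distance from the point to its projection (the camber).
    t = (np.stack([x - x[0], z - z[0]], axis=1) @ chord_vec) / chord_len**2
    foot = np.outer(t, chord_vec) + np.array([x[0], z[0]])
    dist = np.hypot(x - foot[:, 0], z - foot[:, 1])
    return dist.max() / chord_len

# Hypothetical section with an 11% parabolic camber line over a 10 mm chord.
x = np.linspace(0.0, 10.0, 200)
z = 4 * 0.11 * 10.0 * (x / 10.0) * (1 - x / 10.0)
print(f"camber/chord = {camber_ratio(x, z):.2f}")  # ~0.11
```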
Three-dimensional (3D) imaging with structured light is crucial in diverse scenarios, ranging from intelligent manufacturing and medicine to entertainment. However, current structured light methods rely on projector-camera synchronization, which limits the use of affordable imaging devices and hinders consumer applications. In this work, we introduce an asynchronous structured light imaging approach based on generative deep neural networks that relaxes the synchronization constraint and tackles the resulting fringe pattern aliasing without relying on any a priori constraint of the projection system. To this end, we propose a generative deep neural network with a U-Net-like encoder-decoder architecture that learns the underlying fringe features directly by exploiting the intrinsic prior principles of fringe pattern aliasing. The network is trained within an adversarial learning framework and supervised via a statistics-informed loss function. We evaluate its performance on intensity, phase, and 3D reconstruction fields and show that the trained network can separate aliased fringe patterns, producing results comparable with the synchronous case: the absolute error is no greater than 8 μm, and the standard deviation does not exceed 3 μm. Evaluation on multiple objects and pattern types suggests that the method generalizes to arbitrary asynchronous structured light scenes.
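For orientation, below is a minimal PyTorch sketch of a U-Net-like encoder-decoder generator that maps one aliased fringe image to two separated fringe patterns, with one generator step of an adversarial loop. The depth, channel counts, hinge-style adversarial term, and the names `FringeUNet`, `stats_loss`, and `adv_w` are illustrative assumptions, not the network described in the paper.

```python
# Sketch: U-Net-like generator for fringe-pattern separation, trained
# adversarially with a (placeholder) statistics-informed supervision term.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
    )

class FringeUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = conv_block(1, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, 2, 1)  # two separated fringe patterns

    def forward(self, x):
        # Encoder with skip connections, decoder with concatenation (U-Net).
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))

def generator_step(gen, disc, aliased, target, stats_loss, opt, adv_w=0.01):
    """One generator update: supervised term plus a hinge adversarial term.
    `disc` is a discriminator scoring realism; `stats_loss` stands in for the
    statistics-informed loss, both assumed to be defined elsewhere."""
    opt.zero_grad()
    sep = gen(aliased)
    loss = stats_loss(sep, target) + adv_w * -disc(sep).mean()
    loss.backward()
    opt.step()
    return loss.item()
```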
Funding: the National Natural Science Foundation of China (Grant Nos. 62375078 and 12002197); the Youth Talent Launching Program of Shanghai University; the General Science Foundation of Henan Province (Grant No. 222300420427); the Key Research Project Plan for Higher Education Institutions in Henan Province (Grant No. 24ZX011); the National Key Laboratory of Ship Structural Safety.