To determine the feasibility and practicability of the interrupted continuous wave (CW) approach proposed for real-time simulation of radar intermediate frequency (IF) video signals, theoretical analysis and computer simulation were used. Phases at the two linked points between the end and beginning of adjoining frames are always consistent; the Doppler frequency bias caused by the time delay of the A/D sampling start corresponds to that caused by target acceleration. No digital phase compensation is required at the continuous points, and the interrupted CW approach has evident practical value.
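The phase-continuity property claimed above can be illustrated with a short numerical sketch: if each frame of the simulated IF signal starts from a running phase accumulator, the linked points between adjoining frames are phase-consistent by construction, so no digital compensation is needed. All parameter values below (sampling rate, IF, frame length) are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper)
fs = 1.0e6          # sampling rate, Hz
f_if = 50.0e3       # intermediate frequency, Hz
frame_len = 1024    # samples per frame

def synth_frames(n_frames, phase0=0.0):
    """Generate IF frames whose phase continues across frame boundaries."""
    phase = phase0
    dphi = 2 * np.pi * f_if / fs              # per-sample phase increment
    frames = []
    for _ in range(n_frames):
        n = np.arange(frame_len)
        frames.append(np.cos(phase + dphi * n))
        # Carry the accumulated phase into the next frame
        phase = (phase + dphi * frame_len) % (2 * np.pi)
    return frames

frames = synth_frames(3)
# Reference: the same tone synthesized in one uninterrupted run
full = np.cos(2 * np.pi * f_if / fs * np.arange(3 * frame_len))
```

Concatenating the three frames reproduces the uninterrupted tone sample for sample, which is exactly the consistency at the linked points that makes phase compensation unnecessary.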
This paper presents an algorithm for coding video signals based on the 3-D wavelet transform. When the frame index t of a video signal is treated as a third coordinate, the video signal can be viewed as a block in 3-D space. After splitting the block into smaller sub-blocks, we imitate the 2-D wavelet transform used for images and transform the sub-blocks with a 3-D wavelet. Most of the video signal's energy lies in the decomposed low-frequency sub-bands, and these sub-bands affect the visual quality of the video signal most. By quantizing different sub-bands with different precision and then entropy-encoding each sub-band, we can eliminate inter- and intra-frame redundancy of the video signal and compress the data. Our simulation experiments show that this algorithm achieves very good results.
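The energy-compaction claim can be sketched with one level of a separable 3-D Haar transform on a small sub-block. This is a minimal illustration of the general principle, assuming a Haar filter bank and an 8×8×8 block of smooth synthetic content; it is not the paper's exact filter or block size.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_1d(x, axis):
    """One orthonormal Haar analysis step along one axis: sums then differences."""
    a = np.moveaxis(x, axis, 0)
    lo = (a[0::2] + a[1::2]) / np.sqrt(2)
    hi = (a[0::2] - a[1::2]) / np.sqrt(2)
    return np.moveaxis(np.concatenate([lo, hi]), 0, axis)

def haar_3d(block):
    """Separable 3-D transform: apply the 1-D step along t, rows, and columns."""
    for ax in range(3):
        block = haar_1d(block, ax)
    return block

# A smooth, slowly varying block stands in for typical video content.
t, y, x = np.meshgrid(*(np.linspace(0, 1, 8),) * 3, indexing="ij")
block = np.sin(2 * np.pi * t) + y + x + 0.05 * rng.standard_normal((8, 8, 8))

coeffs = haar_3d(block)
low = coeffs[:4, :4, :4]                       # low-frequency (LLL) sub-band
ratio = (low ** 2).sum() / (coeffs ** 2).sum()
```

Because the transform is orthonormal it preserves total energy, and for smooth content the LLL sub-band captures the large majority of it, which is why that band is quantized with the highest precision.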
With the popularity of online learning and the significant influence of emotion on learning outcomes, more and more research focuses on emotion recognition in online learning. Most current research uses comments from the learning platform or the learner's facial expression for emotion recognition; research data on other modalities are scarce. Most studies also ignore the impact of instructional videos on learners and the guidance that knowledge can provide for the data. To address the lack of data in other modalities, we construct a synchronous multimodal data set for analyzing learners' emotional states in online learning scenarios. The data set records the eye movement data and photoplethysmography (PPG) signals of 68 subjects together with the instructional videos they watched. To address the neglect of instructional videos' effect on learners and of knowledge guidance, a knowledge-enhanced multimodal emotion recognition method for video learning is proposed. This method uses knowledge-based features extracted from instructional videos, such as brightness, hue, saturation, the videos' click-through rate, and emotion generation time, to guide the emotion recognition process for physiological signals. It uses Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks to extract deeper emotional representations and spatiotemporal information from shallow features. The model uses a multi-head attention (MHA) mechanism to obtain the critical information in the extracted deep features. Then, a Temporal Convolutional Network (TCN) is used to learn the information in the deep features and the knowledge-based features. The knowledge-based features supplement and enhance the deep features of the physiological signals. Finally, a fully connected layer is used for emotion recognition, and the recognition accuracy reaches 97.51%. Compared with two recent studies, the accuracy improves by 8.57% and 2.11%, respectively. On four public data sets, the proposed method also achieves better results than the two recent studies. The experimental results show that the proposed knowledge-enhanced multimodal emotion recognition method has good performance and robustness.
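The multi-head attention step in the pipeline above can be sketched in plain NumPy: each head attends over the sequence of deep features and re-weights it by scaled dot-product scores. Shapes, head count, and weight initialization below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, n_heads, w_q, w_k, w_v, w_o):
    """x: (seq_len, d_model). Standard scaled dot-product attention per head."""
    seq_len, d_model = x.shape
    d_head = d_model // n_heads
    # Project and split into heads: (n_heads, seq_len, d_head)
    q = (x @ w_q).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    k = (x @ w_k).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    v = (x @ w_v).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)   # (heads, seq, seq)
    attn = softmax(scores, axis=-1)                       # rows sum to 1
    out = (attn @ v).transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ w_o, attn

seq_len, d_model, n_heads = 10, 16, 4   # illustrative sizes
x = rng.standard_normal((seq_len, d_model))  # stand-in for CNN/LSTM features
w = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(4)]
out, attn = multi_head_attention(x, n_heads, *w)
```

In the described method, the re-weighted features would then feed the TCN together with the knowledge-based features before the fully connected classification layer.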
Psyllids, or jumping plant lice (Hemiptera: Sternorrhyncha: Psylloidea), are a group of small phytophagous insects that include some important pests of crops worldwide. Sexual communication of psyllids occurs via vibrations transmitted through host plants, which play an important role in mate recognition and localization. The signals are species-specific and can be used to aid psyllid taxonomy and pest control. Several hypotheses have been proposed for the mechanism that generates these vibrations, of which stridulation, that is, friction between parts of the forewing and thorax, has received the most attention. We investigated vibrational communication in the European pear psyllid species Cacopsylla pyrisuga (Foerster, 1848) using laser vibrometry and high-speed video recording to directly observe the movements associated with signal production. We describe for the first time the basic characteristics of the signals and signal emission of this species. Based on observations and analysis of the video recordings using a point-tracking algorithm, and their comparison with laser vibrometer recordings, we argue that males of C. pyrisuga produce the vibrations primarily by wing buzzing, that is, tremulation that does not involve friction between the wings and thorax. Comparing the observed signal properties with previously published data, we predict that wing buzzing is the main mechanism of signal production in all vibrating psyllids.
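A comparison like the one described, between a point-tracked wing trajectory and a laser-vibrometer trace, typically rests on matching dominant frequencies. The sketch below estimates the dominant frequency of a recording via an FFT peak search; the sampling rate and the 180 Hz "wing-buzz" tone are synthetic assumptions, not measured psyllid data.

```python
import numpy as np

fs = 4000.0                          # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
f_buzz = 180.0                       # hypothetical buzz fundamental, Hz
rng = np.random.default_rng(2)
signal = np.sin(2 * np.pi * f_buzz * t) + 0.2 * rng.standard_normal(t.size)

def dominant_frequency(x, fs):
    """Return the frequency (Hz) of the largest non-DC spectral peak."""
    spec = np.abs(np.fft.rfft(x * np.hanning(x.size)))  # windowed magnitude
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    spec[0] = 0.0                    # ignore the DC component
    return freqs[np.argmax(spec)]

f_est = dominant_frequency(signal, fs)
```

If the trajectory of a tracked wing point and the plant-borne vibration share the same dominant frequency, that supports tremulation (wing buzzing) over stridulation as the signal-production mechanism.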
Funding (multimodal emotion recognition study): supported by the National Science Foundation of China (Grant Nos. 62267001, 61906051).
Funding (psyllid study): supported by the Slovenian Research and Innovation Agency (ARIS) through the core research funding program "Communities, interactions and communications in ecosystems" (P1-0255) awarded to the National Institute of Biology.