Funding: Financially supported by the China Academy of Chinese Medical Sciences Innovation Fund, No. CI2021A03407 (to WZB), and the National Natural Science Foundation of China, No. 81973789 (to FFC).
Abstract: Pericytes, as the mural cells surrounding the microvasculature, play a critical role in regulating the microcirculation; however, how these cells respond to ischemic stroke remains unclear. To determine the temporal alterations in pericytes after ischemia/reperfusion, we used a 1-hour middle cerebral artery occlusion model and examined the animals at 2, 12, and 24 hours after reperfusion. Our results showed that, in the reperfused regions, cerebral blood flow decreased and infarct volume increased over time. Furthermore, pericytes in the infarct regions contracted and acted on the vascular endothelial cells within 24 hours after reperfusion. These effects may result in incomplete microcirculatory reperfusion and gradual worsening over time in the acute phase. These findings provide strong evidence explaining the "no-reflow" phenomenon that occurs after recanalization in clinical practice.
Funding: Supported by the National Key Research and Development Program of China under Grant No. 2020AAA0106200 and the National Natural Science Foundation of China under Grant No. 61832016.
Abstract: In this paper, we present Emotion-Aware Music Driven Movie Montage, a novel paradigm for the challenging task of generating movie montages. Specifically, given a movie and a piece of music as guidance, our method aims to generate a montage from the movie that is emotionally consistent with the music. Unlike previous work such as video summarization, this task requires not only video content understanding but also emotion analysis of both the input movie and the music. To this end, we propose a two-stage framework comprising a learning-based module for predicting emotion similarity and an optimization-based module for selecting and composing candidate movie shots. The core of our method is to align music clips and movie shots in a multi-modal latent space via contrastive learning and to estimate their emotional similarity there. Subsequently, montage generation is modeled as a joint optimization of emotion similarity and additional constraints, such as scene-level story completeness and shot-level rhythm synchronization. We conduct both qualitative and quantitative evaluations to demonstrate that our method generates emotionally consistent montages and outperforms alternative baselines.
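The contrastive alignment step described above can be sketched as a symmetric InfoNCE-style objective over paired music-clip and movie-shot embeddings. The following is a minimal illustrative sketch, not the authors' implementation: the function name, embedding dimensions, and temperature value are assumptions, and the real system would use learned encoders rather than random vectors.

```python
import numpy as np

def info_nce_loss(music_emb, shot_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    music_emb, shot_emb: (batch, dim) arrays; row i of each is a positive pair,
    all other rows in the batch serve as negatives.
    """
    # L2-normalize so dot products become cosine similarities
    m = music_emb / np.linalg.norm(music_emb, axis=1, keepdims=True)
    s = shot_emb / np.linalg.norm(shot_emb, axis=1, keepdims=True)
    logits = m @ s.T / temperature          # (batch, batch) similarity matrix
    labels = np.arange(len(m))              # positives lie on the diagonal

    def cross_entropy(lg, lb):
        lg = lg - lg.max(axis=1, keepdims=True)   # numerical stability
        log_prob = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_prob[np.arange(len(lb)), lb].mean()

    # Symmetric: music-to-shot and shot-to-music retrieval directions
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))

rng = np.random.default_rng(0)
music = rng.normal(size=(8, 64))                      # stand-in music embeddings
shots = music + 0.1 * rng.normal(size=(8, 64))        # noisy "aligned" shot embeddings
loss = info_nce_loss(music, shots)
```

Minimizing this loss pulls emotionally matched music/shot pairs together in the shared latent space while pushing mismatched pairs apart; at inference time, the resulting cosine similarity can serve as the emotion-similarity score fed into the shot-selection optimization.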