Funding: This study was funded by the Science and Technology Project in Xi'an (No. 22GXFW0123) and supported by the Special Fund Construction Project of Key Disciplines in Ordinary Colleges and Universities in Shaanxi Province. The authors would like to thank the anonymous reviewers for their helpful comments and suggestions.
Abstract: As image manipulation technology advances rapidly, the malicious use of image tampering has escalated alarmingly, posing a significant threat to social stability. In image tampering localization, accurately localizing tampered regions remains challenging when training samples are limited and the regions vary in type and size. These issues impede a model's universality and generalization capability and detrimentally affect its performance. To tackle them, we propose FL-MobileViT, an improved MobileViT model devised for image tampering localization. The proposed model uses a dual-stream architecture that processes the RGB and noise domains independently and captures richer tampering traces through dual-stream integration. Meanwhile, the model incorporates the Focused Linear Attention mechanism into the lightweight MobileViT network. This substitution significantly reduces computational complexity, resolves the homogeneity problem of traditional Transformer attention mechanisms, enhances feature-extraction diversity, and improves localization performance. To comprehensively fuse the outputs of both feature extractors, we introduce the ASPP (Atrous Spatial Pyramid Pooling) architecture for multi-scale feature fusion, which enables more precise localization of tampered regions of various sizes. Furthermore, to bolster the model's generalization ability, we adopt a contrastive learning method and devise a joint optimization training strategy that leverages the fused features and captures differences in feature distribution within tampered images. This strategy computes a contrastive loss at various stages of the feature extractor and uses it as an additional constraint alongside the cross-entropy loss. As a result, overfitting is effectively alleviated and the differentiation between tampered and untampered regions is enhanced. Experimental evaluations on five benchmark datasets (IMD-20, CASIA, NIST-16, Columbia, and Coverage) validate the effectiveness of the proposed model. The carefully calibrated FL-MobileViT consistently outperforms numerous existing general-purpose models in localization accuracy across diverse datasets, demonstrating superior adaptability.
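The joint optimization strategy described above pairs a pixel-wise cross-entropy loss on the fused prediction with contrastive losses computed at several stages of the feature extractor. The sketch below shows one way such a combined objective could be wired up in PyTorch; it is a minimal illustration under assumptions of our own (a prototype-style contrastive term, placeholder tensor shapes, and a weighting factor lambda_con), not the authors' implementation.

```python
# Minimal sketch of a joint loss: segmentation cross-entropy plus per-stage
# contrastive constraints. All names and shapes are illustrative assumptions.
import torch
import torch.nn.functional as F


def stage_contrastive_loss(features, mask, temperature=0.1):
    """Contrast mean embeddings of tampered vs. untampered pixels at one stage.

    features: (B, C, H, W) feature map from one stage of the extractor.
    mask:     (B, 1, h, w) binary ground-truth tamper mask.
    """
    mask = F.interpolate(mask.float(), size=features.shape[-2:], mode="nearest")
    eps = 1e-6
    # Mean-pool embeddings inside and outside the tampered region.
    pos = (features * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + eps)
    neg = (features * (1 - mask)).sum(dim=(2, 3)) / ((1 - mask).sum(dim=(2, 3)) + eps)
    pos = F.normalize(pos, dim=1)
    neg = F.normalize(neg, dim=1)
    # Penalize similarity between the two prototypes, pushing them apart.
    sim = (pos * neg).sum(dim=1) / temperature
    return F.softplus(sim).mean()


def joint_loss(logits, stage_features, mask, lambda_con=0.1):
    """Cross-entropy on the fused prediction plus contrastive constraints per stage."""
    ce = F.binary_cross_entropy_with_logits(logits, mask.float())
    con = sum(stage_contrastive_loss(f, mask) for f in stage_features)
    return ce + lambda_con * con


# Example call with random tensors (shapes are illustrative only):
# logits = torch.randn(2, 1, 64, 64)
# stages = [torch.randn(2, 96, 32, 32), torch.randn(2, 128, 16, 16)]
# mask = torch.randint(0, 2, (2, 1, 64, 64))
# loss = joint_loss(logits, stages, mask)
```

Keeping the contrastive term as a weighted auxiliary constraint, as the abstract suggests, leaves the cross-entropy loss as the primary training signal while encouraging tampered and untampered features to separate at every stage of the extractor.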
Abstract: This October, besides the usual China International Trade Fair for Apparel Fabrics and Accessories held in Shanghai, another important event was set to take place in the city: the 2009 Annual Conference of the International Textile Manufacturers Federation (ITMF), making Shanghai the focal point of the global textile industry's attention.
Funding: Supported by the Theodore & Loretta Williams Graduate Research Award Fund for Arts Health at the University of North Carolina at Greensboro.
Abstract: Background: Dissociative attentional stimuli (e.g., music, video) are effective in decreasing ratings of perceived exertion (RPE) during low-to-moderate intensity exercise, but have produced inconsistent results during exercise at higher intensity. The purpose of this study was to assess attentional focus and RPE during high-intensity exercise as a function of being exposed to music, video, both (music and video), or a no-treatment control condition. Methods: During the first session, healthy men (n = 15) completed a maximal fitness test to determine the workload necessary for high-intensity exercise (operationalized as 125% of ventilatory threshold) to be performed during subsequent sessions. On 4 subsequent days, they completed 20 min of high-intensity exercise in a no-treatment control condition or while listening to music, watching a video, or both. Attentional focus, RPE, heart rate, and distance covered were measured every 4 min during the exercise. Results: Music and video in combination resulted in significantly lower RPE across time (partial η² = 0.36), and the size of the effect increased over time (partial η² = 0.14). Additionally, music and video in combination resulted in a significantly more dissociative focus than the other conditions (partial η² = 0.29). Conclusion: Music and video in combination may result in lower perceived exertion during high-intensity exercise when compared with music or video in isolation. Future research will be necessary to test whether reductions in perceived exertion in response to dissociative attentional stimuli have implications for exercise adherence.
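For reference, the partial eta-squared values reported above are the standard repeated-measures effect size, conventionally computed from the ANOVA sums of squares as

$$\eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}},$$

where, by common convention, values of roughly 0.14 and above are read as large effects.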
Abstract: Listening to music manipulates attention to be more externally focused, which has the potential to improve muscular efficiency. This study aimed to determine the effect of listening to music on muscle activation during an isometric exercise task and to compare this effect with those of other attentional focus conditions. Apparently healthy subjects (n = 35; 16 men/19 women) completed an isometric elbow flexion task for 1 min in three randomized and counterbalanced conditions: internal focus (INT), external focus with a simple distraction task (EXT), or listening to music (MUS). Muscle activation of the biceps and triceps brachii and heart rate (HR) were recorded throughout the exercise tasks. Ratings of perceived exertion (RPE), affective valence, and motivation were measured at the end of each trial. There was no difference in muscle activation measures among the three conditions. HR during MUS was lower than during EXT at 15 s ([89.4 ± 11.8] beats/min vs. [93.1 ± 12.9] beats/min; p = 0.018) and 30 s ([90.6 ± 12.4] beats/min vs. [94.2 ± 12.5] beats/min; p = 0.026), and lower than during INT at 60 s ([93.3 ± 13.3] beats/min vs. [96.7 ± 12.0] beats/min; p = 0.016). Overall RPE was higher for INT (13.4 ± 2.2) than for MUS (12.6 ± 2.0; p = 0.020) and EXT (11.94 ± 2.22; p < 0.001). Affective valence was higher for MUS than for INT ([2.7 ± 1.4] vs. [2.1 ± 1.5]; p = 0.011). Manipulating attentional focus did not alter muscle activation during a light-intensity isometric muscular endurance task, though MUS was reported as more positive and requiring less exertion to complete than INT. Music can therefore be recommended during light-intensity isometric exercise based on the psychological benefits observed.