Journal Articles (230 articles found)
Multimodal Sentiment Analysis Based on a Cross-Modal Multihead Attention Mechanism
1
Authors: Lujuan Deng, Boyi Liu, Zuhe Li. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 1157-1170, 14 pages.
Multimodal sentiment analysis aims to understand people's emotions and opinions from diverse data. Concatenating or multiplying various modalities is a traditional multimodal sentiment analysis fusion method, but this kind of fusion does not exploit the correlation information between modalities. To solve this problem, this paper proposes a model based on a multi-head attention mechanism. First, the original data are preprocessed, and the feature representation is converted into a sequence of word vectors, with positional encoding introduced to better capture the semantic and sequential information in the input sequence. Next, the encoded input sequence is fed into a transformer model for further processing and learning. At the transformer layer, a cross-modal attention consisting of a pair of multi-head attention modules is employed to reflect the correlation between modalities. Finally, the processed results are passed to a feedforward neural network, and the emotional output is obtained through a classification layer. Through this processing flow, the model can capture semantic information and contextual relationships and achieves good results on various natural language processing tasks. The model was tested on the CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) and Multimodal EmotionLines Dataset (MELD) benchmarks, achieving an accuracy of 82.04% and an F1 score of 80.59% on the former dataset.
Keywords: emotion analysis; deep learning; cross-modal attention mechanism
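The paired cross-modal attention described in this abstract can be illustrated with a short sketch: each modality supplies the queries while the other supplies keys and values. This is a minimal PyTorch illustration with assumed dimensions, not the authors' released code.

```python
# Minimal sketch of a paired cross-modal attention layer (assumed shapes,
# illustrative only): text attends to audio and audio attends to text.
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.text_to_audio = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.audio_to_text = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text, audio):
        # Each stream queries the other modality, exposing cross-modal
        # correlations that plain concatenation or multiplication would miss.
        t, _ = self.text_to_audio(query=text, key=audio, value=audio)
        a, _ = self.audio_to_text(query=audio, key=text, value=text)
        return t, a

# Toy usage: batch of 2, sequence length 10, feature dimension 256.
t_out, a_out = CrossModalAttention()(torch.randn(2, 10, 256), torch.randn(2, 10, 256))
```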
Cross-Modal Consistency with Aesthetic Similarity for Multimodal False Information Detection
2
Authors: Weijian Fan, Ziwei Shi. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 2723-2741, 19 pages.
With the explosive growth of false information on social media platforms, the automatic detection of multimodal false information has received increasing attention. Recent research has significantly contributed to multimodal information exchange and fusion, with many methods attempting to integrate unimodal features to generate multimodal news representations. However, they have yet to fully explore the hierarchical and complex semantic correlations between different modal contents, which severely limits their performance in detecting multimodal false information. This work proposes a two-stage detection framework for multimodal false information detection, called ASMFD, which uses image aesthetic similarity to segment and explore the consistency and inconsistency features of images and texts. Specifically, we first use the Contrastive Language-Image Pre-training (CLIP) model to learn the relationship between text and images through label awareness, and train an image aesthetic attribute scorer using an aesthetic attribute dataset. Then, we calculate the aesthetic similarity between the image and related images and use this similarity as a threshold to divide the multimodal correlation matrix into consistency and inconsistency matrices. Finally, a fusion module is designed to identify the features essential for detecting multimodal false information. In extensive experiments on four datasets, ASMFD outperforms state-of-the-art baseline methods.
Keywords: social media; false information detection; image aesthetic assessment; cross-modal consistency
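The thresholding step described above, where the aesthetic similarity splits the correlation matrix into consistency and inconsistency parts, might look roughly like the following sketch. The function name, shapes, and the hard-threshold rule are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch: split a text-image correlation matrix into consistency and
# inconsistency matrices using an aesthetic-similarity score s as the cutoff.
import torch

def split_correlation(corr: torch.Tensor, s: float):
    consistency = torch.where(corr >= s, corr, torch.zeros_like(corr))    # aligned content
    inconsistency = torch.where(corr < s, corr, torch.zeros_like(corr))   # mismatched content
    return consistency, inconsistency

corr = torch.rand(12, 49)           # e.g., 12 text tokens vs. 7x7 image patches
cons, incons = split_correlation(corr, s=0.6)
```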
A Multi-Level Circulant Cross-Modal Transformer for Multimodal Speech Emotion Recognition [cited by 1]
3
Authors: Peizhu Gong, Jin Liu, Zhongdai Wu, Bing Han, Y. Ken Wang, Huihua He. Computers, Materials & Continua (SCIE, EI), 2023, Issue 2, pp. 4203-4220, 18 pages.
Speech emotion recognition, an important component of human-computer interaction technology, has received increasing attention. Recent studies have treated emotion recognition of speech signals as a multimodal task, since it involves the semantic features of two different modalities, i.e., audio and text. However, existing methods often fail to represent features effectively and to capture cross-modal correlations. This paper presents a multi-level circulant cross-modal Transformer (MLCCT) for multimodal speech emotion recognition. The proposed model comprises three steps: feature extraction, interaction, and fusion. Self-supervised embedding models are introduced for feature extraction, giving a more powerful representation of the original data than spectrograms or audio features such as Mel-frequency cepstral coefficients (MFCCs) and low-level descriptors (LLDs). In particular, MLCCT contains two types of feature interaction processes: a bidirectional Long Short-Term Memory (Bi-LSTM) with a circulant interaction mechanism is proposed for low-level features, while a two-stream residual cross-modal Transformer block is applied when high-level features are involved. Finally, self-attention blocks are chosen for fusion, and a fully connected layer makes the predictions. To evaluate the performance of the proposed model, comprehensive experiments are conducted on three widely used benchmark datasets: IEMOCAP, MELD, and CMU-MOSEI. The competitive results verify the effectiveness of the approach.
Keywords: speech emotion recognition; self-supervised embedding model; cross-modal transformer; self-attention
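The two-stream residual cross-modal Transformer block applied to high-level features might resemble the sketch below (one stream shown; a mirrored block handles the other direction). Dimensions and layer choices are assumptions, not the MLCCT reference code.

```python
# One stream of a residual cross-modal Transformer block: x (e.g., audio
# features) queries a context sequence (e.g., text features). Illustrative only.
import torch
import torch.nn as nn

class ResidualCrossModalBlock(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x, context):
        x = self.norm1(x + self.attn(x, context, context)[0])  # residual cross-attention
        return self.norm2(x + self.ffn(x))                     # residual feed-forward

audio, text = torch.randn(2, 20, 256), torch.randn(2, 16, 256)
fused = ResidualCrossModalBlock()(audio, text)
```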
TECMH:Transformer-Based Cross-Modal Hashing For Fine-Grained Image-Text Retrieval
4
Authors: Qiqi Li, Longfei Ma, Zheng Jiang, Mingyong Li, Bo Jin. Computers, Materials & Continua (SCIE, EI), 2023, Issue 5, pp. 3713-3728, 16 pages.
In recent years, cross-modal hash retrieval has become a popular research field because of its high efficiency and low storage cost. Cross-modal retrieval technology can be applied to search engines, cross-modal medical processing, and other areas. The prevailing approach uses a multi-label matching paradigm to carry out retrieval tasks. However, such methods do not use the fine-grained information in the multimodal data, which may lead to suboptimal results. To prevent cross-modal matching from degenerating into label matching, this paper proposes an end-to-end fine-grained cross-modal hash retrieval method that focuses on the fine-grained semantic information of multimodal data. First, the method refines the image features and, instead of representing text features with multiple labels, processes text with BERT. Second, it uses the inference capability of the transformer encoder to generate global fine-grained features. Finally, to better judge the effect of the fine-grained model, the evaluation uses datasets from the image-text matching field rather than traditional label-matching datasets. Experiments on the Microsoft COCO (MS-COCO) and Flickr30K datasets, compared against previous classical methods, show that this method achieves more advanced results in the cross-modal hash retrieval field.
Keywords: deep learning; cross-modal retrieval; hash learning; transformer
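The final step shared by hashing methods of this kind, turning continuous transformer features into binary codes for fast Hamming-distance retrieval, can be sketched as follows. The tanh relaxation and sign binarization are the standard recipe in this literature; dimensions are assumptions, not the TECMH code.

```python
# Sketch of hash-code generation and Hamming ranking (illustrative).
import torch
import torch.nn as nn

hash_bits = 64
proj = nn.Linear(768, hash_bits)      # 768 ~ a typical BERT/transformer width

def to_hash(features, training=True):
    h = torch.tanh(proj(features))            # differentiable relaxation for training
    return h if training else torch.sign(h)   # {-1, +1} codes at retrieval time

def hamming(a, b):
    return (hash_bits - a @ b.T) / 2  # valid for codes in {-1, +1}

query = to_hash(torch.randn(1, 768), training=False)
database = to_hash(torch.randn(100, 768), training=False)
ranking = hamming(query, database).argsort(dim=1)   # nearest codes first
```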
ViT2CMH:Vision Transformer Cross-Modal Hashing for Fine-Grained Vision-Text Retrieval
5
Authors: Mingyong Li, Qiqi Li, Zheng Jiang, Yan Ma. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 8, pp. 1401-1414, 14 pages.
In recent years, the development of deep learning has further improved hash retrieval technology. Most existing hashing methods use Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) to process image and text information, respectively. This subjects images or texts to local constraints, and inherent label matching cannot capture fine-grained information, often leading to suboptimal results. Driven by the development of the transformer model, we propose a framework called ViT2CMH, based mainly on the Vision Transformer rather than CNNs or RNNs, to handle deep cross-modal hashing tasks. Specifically, we use a BERT network to extract text features and the Vision Transformer as the image network of the model. Finally, the features are transformed into hash codes for efficient and fast retrieval. We conduct extensive experiments on Microsoft COCO (MS-COCO) and Flickr30K, comparing against baseline hashing methods and image-text matching methods, and show that our method performs better.
Keywords: hash learning; cross-modal retrieval; fine-grained matching; transformer
Adequate alignment and interaction for cross-modal retrieval
6
Authors: Mingkang Wang, Min Meng, Jigang Liu, Jigang Wu. Virtual Reality & Intelligent Hardware (EI), 2023, Issue 6, pp. 509-522, 14 pages.
Background: Cross-modal retrieval has attracted widespread attention in many cross-media similarity search applications, particularly image-text retrieval in the fields of computer vision and natural language processing. Recently, visual and semantic embedding (VSE) learning has shown promising improvements in image-text retrieval tasks. Most existing VSE models employ two unrelated encoders to extract features and then use complex methods to contextualize and aggregate these features into holistic embeddings. Despite recent advances, existing approaches still suffer from two limitations: (1) without considering intermediate interactions and adequate alignment between different modalities, these models cannot guarantee the discriminative ability of the representations; and (2) existing feature aggregators are susceptible to certain noisy regions, which may lead to unreasonable pooling coefficients and affect the quality of the final aggregated features. Methods: To address these challenges, we propose a novel cross-modal retrieval model containing a well-designed alignment module and a novel multimodal fusion encoder, which aims to learn adequate alignment and interaction of aggregated features to effectively bridge the modality gap. Results: Experiments on the Microsoft COCO and Flickr30K datasets demonstrated the superiority of our model over state-of-the-art methods.
Keywords: cross-modal retrieval; visual semantic embedding; feature aggregation; transformer
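For context, the kind of learned pooling aggregator this paper argues is susceptible to noisy regions can be sketched in a few lines: a scoring layer produces pooling coefficients, and a noisy region that earns a large coefficient distorts the holistic embedding. Shapes are illustrative assumptions, and this shows the baseline being improved upon, not the paper's fusion encoder.

```python
# A common attention-pooling baseline for aggregating region features into
# one holistic embedding; noisy regions can receive unreasonable coefficients.
import torch
import torch.nn as nn

class AttnPool(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, regions):                          # (batch, n_regions, dim)
        w = torch.softmax(self.score(regions), dim=1)    # pooling coefficients
        return (w * regions).sum(dim=1)                  # holistic embedding

embedding = AttnPool()(torch.randn(2, 36, 256))          # 36 detected regions per image
```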
Review of Visible-Infrared Cross-Modality Person Re-Identification
7
Author: Yinyin Zhang. Journal of New Media, 2023, Issue 1, pp. 23-31, 9 pages.
Person re-identification (ReID) is a sub-problem of image retrieval. It is a technology that uses computer vision to identify a specific pedestrian in a collection of pictures or videos, where the pedestrian images to be matched are captured by different surveillance devices. At present, most ReID methods deal with matching between visible and visible images, but with the continuous improvement of security monitoring systems, more and more infrared cameras are used for surveillance at night or in dim light. Because of the imaging differences between infrared and RGB cameras, there is a large visual gap between cross-modality images, so traditional ReID methods are difficult to apply in this scenario. In view of this, studying pedestrian matching between the visible and infrared modalities is particularly crucial. Visible-infrared person re-identification (VI-ReID) was first proposed in 2017, has since attracted more and more attention, and many advanced methods have emerged.
Keywords: person re-identification; cross-modality
Use of sensory substitution devices as a model system for investigating cross-modal neuroplasticity in humans [cited by 1]
8
Authors: Amy C. Nau, Matthew C. Murphy, Kevin C. Chan. Neural Regeneration Research (SCIE, CAS, CSCD), 2015, Issue 11, pp. 1717-1719, 3 pages.
Blindness provides an unparalleled opportunity to study plasticity of the nervous system in humans. Seminal work in this area examined the often dramatic modifications to the visual cortex that result when visual input is completely absent from birth or very early in life (Kupers and Ptito, 2014). More recent studies have explored what happens to the visual pathways in the context of acquired blindness. This is particularly relevant, as the majority of diseases that cause vision loss occur in the elderly.
Keywords: sensory substitution devices; cross-modal neuroplasticity; BOLD
Cross-Modal Hashing Retrieval Based on Deep Residual Network
9
Authors: Zhiyi Li, Xiaomian Xu, Du Zhang, Peng Zhang. Computer Systems Science & Engineering (SCIE, EI), 2021, Issue 2, pp. 383-405, 23 pages.
In the era of big data, rich in we-media content, single-mode retrieval systems can no longer meet people's demand for information retrieval. This paper proposes a new solution to the problem of feature extraction and unified mapping of different modes: a Cross-Modal Hashing retrieval algorithm based on a Deep Residual Network (CMHR-DRN). The model is constructed in two stages. The first stage extracts features from the different modal data: a Deep Residual Network (DRN) extracts the image features, and a combination of TF-IDF with a fully connected network extracts the text features; the resulting image and text features serve as input to the second stage. In the second stage, the image and text features are mapped into hash functions by supervised learning, projecting both into a common binary Hamming space. During this mapping, the distance relationships of the original feature spaces are preserved in the common space as far as possible, to improve cross-modal retrieval accuracy. In training the model, adaptive moment estimation (Adam) computes an adaptive learning rate for each parameter, and stochastic gradient descent (SGD) is used to minimize the loss function. The whole training process is carried out on the Caffe deep learning framework. Experiments show that the proposed CMHR-DRN algorithm has better retrieval performance and stronger advantages than the other cross-modal algorithms CMFH, CMDN, and CMSSH.
Keywords: deep residual network; cross-modal retrieval; hashing; CMHR-DRN
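The text branch described here, TF-IDF vectors fed through a fully connected network, is simple to sketch. This version uses scikit-learn for TF-IDF and PyTorch for the network; the layer sizes and 64-bit output are illustrative assumptions, not the CMHR-DRN configuration.

```python
# Sketch of a TF-IDF + fully connected text feature extractor (illustrative).
import torch
import torch.nn as nn
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["a cat sat on the mat",
        "deep residual networks for images",
        "hash codes for retrieval"]
tfidf = TfidfVectorizer().fit_transform(docs).toarray()   # (n_docs, vocab_size)

text_net = nn.Sequential(
    nn.Linear(tfidf.shape[1], 256), nn.ReLU(),
    nn.Linear(256, 64), nn.Tanh(),    # relaxed 64-bit hash feature in (-1, 1)
)
text_features = text_net(torch.tensor(tfidf, dtype=torch.float32))
```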
Mechanism of Cross-modal Information Influencing Taste
10
Authors: Pei Liang, Jia-yu Jiang, Qiang Liu, Su-lin Zhang, Hua-jing Yang. Current Medical Science (SCIE, CAS), 2020, Issue 3, pp. 474-479, 6 pages.
Studies on the integration of cross-modal information with taste perception have mostly been limited to the uni-modal level. Cross-modal sensory interaction and the neural networks for processing and controlling such information have not been fully explored, and the mechanisms remain poorly understood. This mini-review examines the impact of uni-modal and multi-modal information on taste perception from the perspective of cognitive status, such as emotion, expectation, and attention, and discusses the hypothesis that cognitive status is the key step through which vision exerts its influence on taste. This work may help researchers better understand the mechanism of cross-modal information processing and further develop neurally based artificial intelligence (AI) systems.
Keywords: cross-modal information integration; cognitive status; taste perception
Comparison of the Time Course of Inhibition of Return (IOR) in Central and Peripheral Vision and Its Age Effects [cited by 1]
11
Authors: Zhou Ran, Duan Jinyun. Psychological Science (CSSCI, CSCD, Peking University Core), 2010, Issue 4, pp. 883-886, 4 pages.
Using a double-cue procedure within the cue-target paradigm and a simple detection task, stimulus onset asynchronies (SOAs) were manipulated over a wide range to examine how the time course of inhibition of return (IOR) differs between central and peripheral vision, and how it compares between young and older adults in each visual field. The results showed that the magnitude of IOR in central vision was smaller than in peripheral vision and that it also disappeared earlier, reflecting different attentional control mechanisms in central and peripheral vision. Young and older adults showed no significant difference in the IOR time course in either visual field, suggesting that the disappearance time of IOR does not vary with age across visual fields.
Keywords: IOR; time course; age effect; visual field
The Influence of Emotional Faces at Depth Positions in Three-Dimensional Space on Inhibition of Return
12
Authors: Qian Cheng, Zhao Yue, Niu Xixi, Gu Jiacan, Wang Aijun. Studies of Psychology and Behavior (Peking University Core), 2024, Issue 1, pp. 8-14, 7 pages.
A virtual three-dimensional scene was constructed using virtual reality technology, and the Posner cueing paradigm was applied in three-dimensional space. Across two experiments, target depth, cue validity, and the valence of emotional faces were manipulated to examine how emotional faces at different depth positions affect visual-spatial inhibition of return (IOR). The results showed that (1) when emotional faces served as targets and the target appeared in near space, face valence interacted with IOR, with smaller IOR for negative faces; and (2) when emotional faces served as cues and the target appeared in far space, face valence interacted with IOR, with larger IOR for negative faces. The study indicates that emotional faces can modulate the magnitude of IOR in three-dimensional space, and that this influence differs between near and far space.
Keywords: three-dimensional space; inhibition of return; emotional faces; attentional orienting
Attention-Enhanced Voice Portrait Model Using Generative Adversarial Network
13
Authors: Jingyi Mao, Yuchen Zhou, Yifan Wang, Junyu Li, Ziqing Liu, Fanliang Bu. Computers, Materials & Continua (SCIE, EI), 2024, Issue 4, pp. 837-855, 19 pages.
Voice portrait technology explores and establishes the relationship between speakers' voices and their facial features, aiming to generate the corresponding facial characteristics from the voice of an unknown speaker. Owing to their powerful advantages in image generation, Generative Adversarial Networks (GANs) are now widely applied across various fields, and existing Voice2Face methods for voice portraits are primarily based on GANs trained on voice-face paired datasets. However, voice portrait models built solely on GANs face limitations in image generation quality and struggle to maintain facial similarity; additionally, the training process is relatively unstable, affecting the model's overall generative performance. To overcome these challenges, we propose a novel deep Generative Adversarial Network model for audio-visual synthesis, named AVP-GAN (Attention-enhanced Voice Portrait Model using a Generative Adversarial Network). The model is based on a convolutional attention mechanism and can generate the corresponding facial image from the voice of an unknown speaker. First, to address training instability, we integrate convolutional neural networks with deep GANs; in the network architecture, we apply spectral normalization to constrain the variation of the discriminator, preventing issues such as mode collapse. Second, to enhance the model's ability to extract relevant features across the two modalities, we propose a voice portrait model based on convolutional attention, which learns the mapping relationship between voice and facial features in a common space along the channel and spatial dimensions independently. Third, to enhance the quality of the generated faces, we incorporate a degradation removal module and use pretrained facial GANs as facial priors to repair and sharpen the generated facial images. Experimental results demonstrate that AVP-GAN achieved a cosine similarity of 0.511, outperforming our comparison model and effectively generating high-quality facial images corresponding to a speaker's voice.
Keywords: cross-modal generation; GANs; voice portrait technology; face synthesis
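The stabilization trick named in this abstract, spectral normalization of the discriminator, is a one-line wrapper in PyTorch. Below is a minimal sketch of a spectrally normalized convolutional discriminator; the layer sizes are assumptions, not the AVP-GAN architecture.

```python
# Spectral normalization bounds each layer's Lipschitz constant, which
# stabilizes GAN training and helps prevent mode collapse (illustrative).
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

disc = nn.Sequential(
    spectral_norm(nn.Conv2d(3, 64, 4, stride=2, padding=1)),    # 64x64 -> 32x32
    nn.LeakyReLU(0.2),
    spectral_norm(nn.Conv2d(64, 128, 4, stride=2, padding=1)),  # 32x32 -> 16x16
    nn.LeakyReLU(0.2),
    nn.Flatten(),
    spectral_norm(nn.Linear(128 * 16 * 16, 1)),                 # real/fake score
)

score = disc(torch.randn(2, 3, 64, 64))
```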
Fake News Detection Based on Text-Modal Dominance and Fusing Multiple Multi-Model Clues
14
Authors: Lifang Fu, Huanxin Peng, Changjin Ma, Yuhan Liu. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 4399-4416, 18 pages.
In recent years, efficiently and accurately identifying multi-modal fake news has become increasingly challenging. First, multi-modal data provide more evidence, but not all of it is equally important. Second, social structure information has proven effective in fake news detection, and combining it while reducing noise is critical. Unfortunately, existing approaches fail to handle these problems. This paper proposes a multi-modal fake news detection framework based on Text-modal Dominance and fusing Multiple Multi-modal Cues (TD-MMC), which utilizes three valuable multi-modal clues: text-modal importance, text-image complementarity, and text-image inconsistency. TD-MMC is dominated by textual content and assisted by image information, while social network information is used to enhance the text representation. To reduce interference from irrelevant social structure information, we use a unidirectional cross-modal attention mechanism to selectively learn the social structure's features. A cross-modal attention mechanism is adopted to obtain text-image cross-modal features while retaining textual features, reducing the loss of important information. In addition, TD-MMC employs a new multi-modal loss to improve the model's generalization ability. Extensive experiments conducted on two public real-world English and Chinese datasets show that the proposed model outperforms state-of-the-art methods on classification evaluation metrics.
Keywords: fake news detection; cross-modal attention mechanism; multi-modal fusion; social network; transfer learning
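The unidirectional cross-modal attention described above, where only the dominant text stream queries the social-structure features so that graph noise cannot overwrite the text representation, can be sketched as follows. Shapes are assumptions for illustration, not the TD-MMC configuration.

```python
# One-way cross-modal attention: text queries social-structure features;
# no symmetric social -> text pass is computed (illustrative sketch).
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=256, num_heads=4, batch_first=True)

text = torch.randn(2, 32, 256)      # dominant modality
social = torch.randn(2, 8, 256)     # social-structure features

enhanced_text, _ = attn(query=text, key=social, value=social)
```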
On IOR Files in CORBA [cited by 2]
15
Authors: Ji Xianwen, Tang Xiaojuan. Journal of Jiamusi University (Natural Science Edition) (CAS), 2006, Issue 2, pp. 217-219, 3 pages.
This paper analyzes the Interoperable Object Reference (IOR) in the Common Object Request Broker Architecture (CORBA), discussing the data structure of IORs, communication between ORBs from different vendors, and the key steps by which a client connects to a server.
Keywords: CORBA; IOR; Linux
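The IOR data structure this note analyzes pairs a repository type ID with a list of tagged profiles; the IIOP profile carries the address information a client needs to reach the server. The sketch below models those parts as plain Python data classes. The field names follow the CORBA/IIOP specification, but this is an illustration, not an ORB API.

```python
# Illustrative model of the pieces of an Interoperable Object Reference (IOR).
from dataclasses import dataclass
from typing import List

@dataclass
class IIOPProfile:
    iiop_version: str     # e.g., "1.2"
    host: str             # address the client connects to
    port: int
    object_key: bytes     # opaque key identifying the servant inside the ORB

@dataclass
class IOR:
    type_id: str                  # repository ID, e.g., "IDL:Bank/Account:1.0"
    profiles: List[IIOPProfile]   # tagged profiles; IIOP is what lets ORBs
                                  # from different vendors interoperate

ior = IOR("IDL:Bank/Account:1.0",
          [IIOPProfile("1.2", "198.51.100.7", 2809, b"\x01\x02")])
```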
ROI + IOR: Advertising Creativity from the Consumer's Perspective [cited by 1]
16
Authors: Tong Yayun, Wang Yuxiao. Shandong Industrial Technology, 2015, Issue 4, pp. 229-230, 2 pages.
The relevance between an advertising idea and the brand or product is not in itself the meaning of the creativity; the meaning lies in making consumers clearly understand that relevance. Originality alone is not what gives an advertisement its appeal and vitality; originality that consumers are willing to accept is. Likewise, the impact of a creative idea does not by itself guarantee communication effectiveness; what guarantees it is the purchase desire that the impact leaves in consumers' memory.
Keywords: ROI; IOR; advertising creativity
Effect and regulation of α-dystroglycan glycosylation on chronic social defeat-induced depressive-like behaviors in mice
17
Authors: Li Yu-ke, Wang Fang. Chinese Journal of Pharmacology and Toxicology (CAS, Peking University Core), 2021, Issue 9, pp. 693-694, 2 pages.
OBJECTIVE: α-Dystroglycan (α-DG) is a predominant component of the dystrophin-glycoprotein complex (DGC) and a recently characterized high-affinity receptor for several extracellular matrix components. Recent research has reported that hypoglycosylation of α-DG is associated with the pathophysiology of diseases, especially muscular dystrophy, but little is known about its role in major depressive disorder (MDD). Like-acetylglucosaminyl transferase (Large) is a key enzyme for glycosylation of α-DG, which mainly modifies two sites in the middle domain of α-DG: Thr-317 and Thr-319. Glycosylated α-DG (GLY-α-DG) binds with high affinity to extracellular matrix (ECM) molecules that contain laminin globular (LG) domains, including perlecan, agrin, and neurexin. Agrin is mainly derived from neurons rather than glial cells; in cultured hippocampal neurons, agrin has been found to regulate the homeostatic plasticity of inhibitory neurons by acting on GLY-α-DG. Mdx mice are transgenic models for the investigation of Duchenne muscular dystrophy; many studies have shown that the expression of GLY-α-DG in the peripheral and brain tissues of Mdx mice is significantly down-regulated, and that Mdx mice show cognitive impairment and high levels of anxiety. In this study, we employed chronic social defeat stress (CSDS) to establish an animal model of depression and examined the expression of GLY-α-DG in brain areas associated with the pathophysiology of depression. METHODS: The social interaction test (SIT) and sucrose preference test (SPT) were used to evaluate depressive-like behavior. Open field (OF) and elevated plus maze (EPM) tests were used to assess the anxiety-like behavior of Mdx mice. The novelty-suppressed feeding test (NSFT), forced swim test (FST), and tail suspension test (TST) were used to detect depressive-like behavior in Mdx mice, and the novel object recognition test (NOR) was applied to evaluate their cognition. Subthreshold social defeat stress was used to explore the susceptibility of Mdx mice to stress. Stereotactic infusion of agrin into the ventral hippocampus (vHip), followed by FST and TST, was used to investigate the antidepressant effects of agrin. Adeno-associated virus (AAV)-mediated overexpression, behavioral tests, and the whole-cell patch-clamp technique were used to determine the impact of Large overexpression on CSDS-susceptible mice. RESULTS: The expression of α-DG and GLY-α-DG was significantly decreased in the vHip of CSDS-susceptible mice. Mdx mice showed decreased expression of GLY-α-DG and increased anxiety-like behaviors; they also displayed some depressive-like behaviors, and their susceptibility to stress was significantly increased. Down-regulating α-DG expression in the vHip by lentivirus increased susceptibility to stress. Administration of agrin to CSDS-susceptible mice exerted antidepressant effects, which partially persisted for a week. The expression of Large was decreased in the vHip, and overexpression of Large through AAV-Large reversed the depressive-like behaviors and restored the decreased frequency and amplitude of mIPSCs. CONCLUSION: GLY-α-DG and its glycosylase are significantly decreased in CSDS-susceptible mice. Administration of agrin and overexpression of Large exert antidepressant effects, which may be related to the promotion of inhibitory synaptic transmission.
Keywords: α-dystroglycan; depressive-like behaviors; social defeat
Preliminary Results of a Pilot IOR Project Using Water-Alternating-CO_2 Injection in a Croatian Oilfield
18
Petroleum Geology & Oilfield Development in Daqing (CAS, CSCD, Peking University Core), 2008, Issue 1, p. 129, 1 page.
Keywords: pilot project; Croatia; pilot test; alternating injection; oilfield; water injection; IOR; improved oil recovery methods
IOR Strategy for the Mehsana Fault-Block Oilfield in the North Cambay Basin, India
19
Author: Jin Peiqiang. Petroleum Geology & Oilfield Development in Daqing (CAS, CSCD, Peking University Core), 2008, Issue 3, p. 14, 1 page.
Keywords: fault-block oilfield; basin; IOR; reservoir heterogeneity; EOR technology; India; reservoir management; oil production
Information for Contributors
20
Transactions of Nonferrous Metals Society of China (SCIE, EI, CAS, CSCD), 2013, Issue 3, p. F0003, 1 page.
Keywords: Nonferrous Metals Society of China; IOR; science and technology; INSPEC; extractive metallurgy; metallic materials; metal working; physical metallurgy