Funding: Supported by the National Natural Science Foundation of China (Nos. 21201109, 21373122 and 21301106)
Abstract: Using a "pillaring" strategy, a new Co(II) MOF, [Co_2(abtc)(bimb)_2]·2H_2O (1; H_4abtc = 3,3′,5,5′-azobenzenetetracarboxylic acid, bimb = 4,4′-bis(imidazol-1-ylmethyl)biphenyl), has been solvothermally synthesized and structurally characterized. Structural determination revealed that 1 features a 3D pillar-layered framework with (4,8)-connected {4^4.6^2}{4^8.6^20} topology based on dinuclear Co(II) SBUs. Magnetic investigation shows that dominant antiferromagnetic coupling is present in compound 1.
Abstract: To address the incomplete semantic information of word vectors and the polysemy problem in text feature extraction, a Twice Attention-weighted algorithm based on BERT (Bidirectional Encoder Representation from Transformers), named TARE, is proposed. First, in the word-vector encoding stage, Q, K, and V matrices are constructed and a self-attention dynamic encoding algorithm captures the semantics of the surrounding context for each word's vector. Second, after the model outputs sentence-level feature vectors, positional markers are used to extract the corresponding fully-connected-layer parameters and build a relation attention matrix. Finally, a sentence-level attention mechanism assigns a different attention score to each sentence-level feature vector, improving the noise robustness of sentence-level features. Experimental results show that on the NYT-10m dataset, compared with the contrastive-learning-based CIL (Contrastive Instance Learning) algorithm, TARE improves the F1 score by 4.0 percentage points, and improves P@M, the mean of Precision@N over the top 100, 200, and 300 instances ranked by descending confidence, by 11.3 percentage points. On the NYT-10d dataset, compared with the attention-based PCNN-ATT (Piecewise Convolutional Neural Network with ATTention mechanism) algorithm, the area under the precision-recall curve (AUC) improves by 4.8 percentage points and P@M by 2.1 percentage points. On the mainstream distantly supervised relation extraction (DSRE) task, TARE effectively improves the model's ability to learn data features.
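The sentence-level attention step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a bag of sentence-level feature vectors and a relation query vector (names `sentence_attention`, `sent_embs`, `rel_query` are hypothetical), and it scores each sentence against the query, softmax-normalizes the scores, and pools the bag, so that noisy sentences receive small weights.

```python
import numpy as np

def sentence_attention(sent_embs: np.ndarray, rel_query: np.ndarray) -> np.ndarray:
    """Pool a bag of sentence-level feature vectors with selective attention.

    sent_embs: (n, d) array, one feature vector per sentence in the bag.
    rel_query: (d,) relation query vector used to score each sentence.
    Returns the (d,) attention-weighted bag representation.
    """
    scores = sent_embs @ rel_query                  # (n,) raw affinity of each sentence
    scores = scores - scores.max()                  # shift for numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()   # softmax attention weights, sum to 1
    return alpha @ sent_embs                        # convex combination of sentence vectors

# Example: a 3-sentence bag with 4-dimensional features.
bag = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.9, 0.1, 0.0, 0.0]])
query = np.array([1.0, 0.0, 0.0, 0.0])
pooled = sentence_attention(bag, query)
```

Because the weights form a convex combination, each coordinate of the pooled vector stays within the range spanned by the bag, and sentences aligned with the relation query dominate the representation.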