Funding: Project (1114022-15) supported by the Major Science and Technology Research Projects of Guangxi Province, China
Abstract: Layered oxide cathode materials Li[Ni_(0.6)Co_(0.2)Mn_(0.2-y)Mg_y]O_(2-z)F_z (0≤y≤0.12, 0≤z≤0.08) were synthesized by a co-precipitation method combined with a ball-milling-assisted high-temperature solid-state method, and the effect of F-Mg co-doping on LiNi_(0.6)Co_(0.2)Mn_(0.2)O_2 was investigated. Compared with previous studies, this doping treatment substantially improves the electrochemical performance in terms of initial coulombic efficiency and cycling stability. At a charge-discharge rate of 0.2C over the voltage range of 2.8-4.4 V, Li[Ni_(0.6)Co_(0.2)Mn_(0.11)Mg_(0.09)]O_(1.96)F_(0.04) delivers an initial discharge specific capacity of 189.7 mA·h/g with a coulombic efficiency of 98.6%, and retains 96.3% of its capacity after 100 cycles. Electrochemical impedance spectroscopy (EIS) results show that Mg-F co-doping lowers the charge-transfer resistance and thereby improves the reaction kinetics, which is the main reason for the material's superior rate capability. Owing to its excellent electrochemical performance, Li[Ni_(0.6)Co_(0.2)Mn_(0.11)Mg_(0.09)]O_(1.96)F_(0.04) is regarded as a promising new cathode material for lithium-ion batteries.
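As a quick check on the figures in the abstract, the first-cycle charge capacity implied by the reported discharge capacity and coulombic efficiency, and the capacity remaining after 100 cycles, follow directly from the definitions CE = discharge/charge and retention = capacity_n/capacity_1. A minimal sketch (the function names are illustrative, not from the paper):

```python
def implied_charge_capacity(discharge_mah_g, coulombic_eff):
    """First-cycle charge capacity implied by CE = discharge / charge."""
    return discharge_mah_g / coulombic_eff

def retained_capacity(discharge_mah_g, retention):
    """Capacity remaining after cycling at the given retention ratio."""
    return discharge_mah_g * retention

# Values reported for Li[Ni0.6Co0.2Mn0.11Mg0.09]O1.96F0.04 at 0.2C:
charge = implied_charge_capacity(189.7, 0.986)   # ≈ 192.4 mA·h/g charged
after_100 = retained_capacity(189.7, 0.963)      # ≈ 182.7 mA·h/g left
print(round(charge, 1), round(after_100, 1))
```

The gap between the charge and discharge values (≈2.7 mA·h/g lost in the first cycle) is what the reported 98.6% initial coulombic efficiency quantifies.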
Funding: This work is supported in part by the Natural Science Foundation of Zhejiang Province of China under Grant No. LQ17F030001, the National Natural Science Foundation of China under Grant No. U1609215, the Qianjiang Talent Program of Zhejiang Province of China under Grant No. QJD1602021, the National Key Technology Research and Development Program of the Ministry of Science and Technology of China under Grant No. 2014BAK14B01, and the Beihang University Virtual Reality Technology and System National Key Laboratory Open Project under Grant No. BUAA-VR-16KF-17.
Abstract: Methods based on deep convolutional neural networks (DCNNs) have recently kept setting new records on the task of predicting depth maps from monocular images. When dealing with video-based applications such as 2D (2-dimensional) to 3D (3-dimensional) video conversion, however, these approaches tend to produce temporally inconsistent depth maps, since their CNN models are optimized over single frames. In this paper, we address this problem by introducing a novel spatial-temporal conditional random field (CRF) model into the DCNN architecture, which enforces temporal consistency between depth map estimations over consecutive video frames. In our approach, temporally consistent superpixel (TSP) segmentation is first applied to an image sequence to establish the correspondence of targets across consecutive frames. A DCNN is then used to regress the depth value of each temporal superpixel, followed by a spatial-temporal CRF layer that models the relationships among the estimated depths in both the spatial and temporal domains. The parameters of the DCNN and CRF models are jointly optimized with back propagation. Experimental results show that our approach not only significantly enhances the temporal consistency of the estimated depth maps compared with existing single-frame-based approaches, but also improves depth estimation accuracy in terms of various evaluation metrics.
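The CRF refinement described above can be viewed as minimizing a quadratic energy over per-superpixel depths: a unary term anchoring each depth to the DCNN regression, plus pairwise terms penalizing depth differences between spatial neighbours (within a frame) and temporal neighbours (the same TSP superpixel across frames). A minimal coordinate-descent sketch of that energy follows; it is a simplification for illustration (the paper trains the CRF jointly with the DCNN by back propagation, and the function and parameter names here are assumptions, not the authors' code):

```python
def refine_depths(unary, spatial_edges, temporal_edges,
                  lam_s=1.0, lam_t=1.0, iters=100):
    """Minimize  sum_i (d_i - u_i)^2
              + lam_s * sum over spatial  edges (i,j) of (d_i - d_j)^2
              + lam_t * sum over temporal edges (i,j) of (d_i - d_j)^2
    by Gauss-Seidel coordinate descent: each update places d_i at the
    weighted mean of its unary value and its neighbours' current depths."""
    nbrs = [[] for _ in unary]          # adjacency list with edge weights
    for i, j in spatial_edges:
        nbrs[i].append((j, lam_s))
        nbrs[j].append((i, lam_s))
    for i, j in temporal_edges:
        nbrs[i].append((j, lam_t))
        nbrs[j].append((i, lam_t))
    d = list(unary)                     # start from the DCNN regressions
    for _ in range(iters):
        for i, u in enumerate(unary):
            num = u + sum(w * d[j] for j, w in nbrs[i])
            den = 1.0 + sum(w for _, w in nbrs[i])
            d[i] = num / den
    return d

# One target tracked as a temporal superpixel over three frames: the noisy
# middle regression (3.5) is pulled toward its temporal neighbours (2.0).
print(refine_depths([2.0, 3.5, 2.0], spatial_edges=[],
                    temporal_edges=[(0, 1), (1, 2)]))
```

Because the energy is a strictly convex quadratic, coordinate descent converges to its unique minimum; raising lam_t trades per-frame fidelity for smoother depth trajectories over time, which is exactly the flicker-reduction effect the paper targets.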