Journal Articles
5 articles found
Effects of blue light of different wavelengths on human retinal pigment epithelial cells (Cited: 5)
1
Authors: 鞠雅晗, 汤志敏, +3 more authors: 王宇瑶, 代小婵, 罗敏, 谷平. 《国际眼科杂志》 (International Eye Science), CAS, PKU Core Journal, 2020, No. 8, pp. 1315-1319 (5 pages)
AIM: To investigate the effects of blue light of different wavelengths on human retinal pigment epithelial (RPE) cells. METHODS: Cultured ARPE-19 cells were randomly divided into a control group and 447 nm, 456 nm, and 468 nm blue-light groups. Control cells were cultured under standard conditions, while cells in the blue-light groups were irradiated for 72 h with an OLED blue backlight at an intensity of 200 lx. Live/dead cell staining, the CCK-8 assay, and real-time PCR were used to compare the effects of blue light of different wavelengths on cell morphology, viability, proliferation, and the mRNA expression of visual-cycle and inflammation markers. RESULTS: After blue-light irradiation, the morphology of ARPE-19 cells changed and cell confluence decreased. The shorter the wavelength, the stronger the inhibition of proliferation, the lower the mRNA expression of the proliferation marker Ki-67, the greater the down-regulation of the visual-cycle markers lecithin retinol acyltransferase (LRAT), cellular retinaldehyde-binding protein (CRALBP), retinol dehydrogenase (RDH), and interphotoreceptor retinoid-binding protein (IRBP), and the greater the up-regulation of the inflammatory factors monocyte chemoattractant protein-1 (MCP-1) and interleukin-6 (IL-6). CONCLUSION: Blue light of all tested wavelengths damages RPE cells, and the shorter the wavelength, the greater the damage.
Keywords: blue light; retinal pigment epithelial cells; proliferation; visual cycle; inflammation
Skyway: Accelerate Graph Applications with a Dual-Path Architecture and Fine-Grained Data Management
2
Authors: Mo Zou, Ming-Zhe Zhang, +4 more authors: Ru-Jia Wang, Xian-He Sun, Xiao-Chun Ye, Dong-Rui Fan, Zhi-Min Tang. Journal of Computer Science & Technology, SCIE, EI, CSCD, 2024, No. 4, pp. 871-894 (24 pages)
Graph processing is a vital component of many AI and big data applications. However, due to its poor locality and complex data access patterns, graph processing is also a known performance killer of AI and big data applications. In this work, we propose to enhance graph processing applications by leveraging fine-grained memory access patterns with a dual-path architecture on top of existing software-based graph optimizations. We first identify that memory accesses to the offset, edge, and state arrays have distinct locality and impact on performance. We then introduce the Skyway architecture, which consists of two primary components: 1) a dedicated direct data path between the core and memory to transfer state array elements efficiently, and 2) data-type-aware fine-grained memory-side row buffer hardware for both the newly designed direct data path and the regular memory hierarchy data path. The proposed Skyway architecture improves overall performance by reducing memory access interference and improving data access efficiency with minimal overhead. We evaluate Skyway on a set of diverse algorithms using large real-world graphs. On a simulated four-core system, Skyway improves performance by 23% on average over the best-performing graph-specialized hardware optimizations.
Keywords: graph application; computer architecture; memory hierarchy
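The abstract's distinction between the offset, edge, and state arrays follows from the standard compressed sparse row (CSR) graph layout: the offset and edge arrays are streamed sequentially, while state-array accesses are indexed by neighbor IDs and are therefore nearly random. The C++ sketch below is only a generic illustration of that access pattern, not the Skyway implementation.

```cpp
#include <cstdint>
#include <vector>

// CSR graph traversal, e.g., one pull-style per-vertex update.
// offset[] and edge[] are read sequentially (good locality); state[]
// is indexed by neighbor IDs, so its accesses are effectively random --
// the fine-grained pattern a dedicated state-array data path targets.
void traverse(const std::vector<uint64_t>& offset,   // size: |V| + 1
              const std::vector<uint32_t>& edge,     // size: |E|
              const std::vector<float>& state,       // per-vertex state
              std::vector<float>& next_state) {
    const size_t num_vertices = offset.size() - 1;
    for (size_t v = 0; v < num_vertices; ++v) {
        float acc = 0.0f;
        for (uint64_t e = offset[v]; e < offset[v + 1]; ++e) {
            acc += state[edge[e]];   // irregular, fine-grained access
        }
        next_state[v] = acc;
    }
}
```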
A Non-Stop Double Buffering Mechanism for Dataflow Architecture (Cited: 4)
3
Authors: Xu Tan, Xiao-Wei Shen, +6 more authors: Xiao-Chun Ye, Da Wang, Dong-Rui Fan, Lunkai Zhang, Wen-Ming Li, Zhi-Min Zhang, Zhi-Min Tang. Journal of Computer Science & Technology, SCIE, EI, CSCD, 2018, No. 1, pp. 145-157 (13 pages)
Double buffering is an effective mechanism to hide the latency of data transfers between on-chip and off-chip memory. However, in dataflow architecture, the swapping of two buffers during the execution of many tiles decreases performance because of the repetitive filling and draining of the dataflow accelerator. In this work, we propose a non-stop double buffering mechanism for dataflow architecture. The proposed non-stop mechanism assigns tiles to the processing element array without stopping the execution of processing elements, by optimizing the control logic in the dataflow architecture. Moreover, we propose a work-flow program to cooperate with the non-stop double buffering mechanism. After optimizations to both the control logic and the work-flow program, the filling and draining of the array need to be done only once across the execution of all tiles belonging to the same dataflow graph. Experimental results show that the proposed double buffering mechanism for dataflow architecture achieves a 16.2% average efficiency improvement over that without the optimization.
Keywords: non-stop double buffering; dataflow architecture; high-performance computing
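For context, a conventional (stop-and-go) double-buffering loop looks roughly like the C++ sketch below: the load of the next tile overlaps with the computation of the current one, but every buffer swap is still a synchronization point where the accelerator drains and refills. This is a generic baseline sketch, not the paper's non-stop mechanism.

```cpp
#include <future>
#include <vector>

// Conventional double buffering: overlap the off-chip load of tile i+1
// with the computation of tile i. The swap between buffers remains a
// per-tile barrier -- the fill/drain overhead the non-stop scheme removes.
void process_tiles(int num_tiles,
                   void (*load_tile)(int, std::vector<float>&),
                   void (*compute_tile)(const std::vector<float>&)) {
    std::vector<float> buf[2];
    load_tile(0, buf[0]);                        // prefill buffer 0
    for (int i = 0; i < num_tiles; ++i) {
        int cur = i & 1;
        std::future<void> next;
        if (i + 1 < num_tiles) {                 // fill the other buffer
            next = std::async(std::launch::async, load_tile,
                              i + 1, std::ref(buf[cur ^ 1]));
        }
        compute_tile(buf[cur]);                  // drain current buffer
        if (next.valid()) next.wait();           // swap barrier per tile
    }
}
```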
A Pipelining Loop Optimization Method for Dataflow Architecture (Cited: 2)
4
Authors: Xu Tan, Xiao-Chun Ye, +6 more authors: Xiao-Wei Shen, Yuan-Chao Xu, Da Wang, Lunkai Zhang, Wen-Ming Li, Dong-Rui Fan, Zhi-Min Tang. Journal of Computer Science & Technology, SCIE, EI, CSCD, 2018, No. 1, pp. 116-130 (15 pages)
With the coming of the exascale supercomputing era, power efficiency has become the most important obstacle to building an exascale system. Dataflow architecture has a native advantage in achieving high power efficiency for scientific applications. However, state-of-the-art dataflow architectures fail to exploit high parallelism for loop processing. To address this issue, we propose a pipelining loop optimization method (PLO), which makes iterations in loops flow through the processing element (PE) array of a dataflow accelerator. This method consists of two techniques: architecture-assisted hardware iteration and instruction-assisted software iteration. In the hardware iteration execution model, an on-chip loop controller is designed to generate loop indexes, reducing the complexity of the computing kernel and laying a good foundation for pipelined execution. In the software iteration execution model, additional loop instructions are presented to solve the iteration dependency problem. Via these two techniques, the average number of instructions ready to execute per cycle is increased to keep the floating-point units busy. Simulation results show that our proposed method outperforms the static and dynamic loop execution models in floating-point efficiency by 2.45x and 1.1x on average, respectively, while the hardware cost of these two techniques is acceptable.
Keywords: dataflow model; control-flow model; loop optimization; exascale computing; scientific application
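The core idea of pipelining loop iterations can be illustrated in plain software: split each iteration into stages and overlap one iteration's load with the previous iteration's compute, so the floating-point units never idle waiting for operands. The C++ sketch below is a generic software-pipelining illustration under that assumption, not the PLO hardware/software iteration mechanism itself.

```cpp
#include <cstddef>
#include <vector>

// Naive loop: each iteration loads a[i] and b[i], then multiply-accumulates;
// the FP unit idles while the next element's load is outstanding.
// Pipelined form: the load for iteration i+1 is issued while the
// multiply-accumulate for iteration i executes, so an operand is always
// in flight -- the same overlap PLO creates across the PE array.
float pipelined_dot(const std::vector<float>& a,
                    const std::vector<float>& b) {
    const size_t n = a.size();
    if (n == 0) return 0.0f;

    float acc = 0.0f;
    float a_next = a[0];                 // prologue: first load
    float b_next = b[0];
    for (size_t i = 0; i + 1 < n; ++i) {
        float a_cur = a_next, b_cur = b_next;
        a_next = a[i + 1];               // stage 1: load for iteration i+1
        b_next = b[i + 1];
        acc += a_cur * b_cur;            // stage 2: compute for iteration i
    }
    acc += a_next * b_next;              // epilogue: last iteration
    return acc;
}
```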
Accelerating Data Transfer in Dataflow Architectures Through a Look-Ahead Acknowledgment Mechanism
5
Authors: Yu-Jing Feng, De-Jian Li, +6 more authors: Xu Tan, Xiao-Chun Ye, Dong-Rui Fan, Wen-Ming Li, Da Wang, Hao Zhang, Zhi-Min Tang. Journal of Computer Science & Technology, SCIE, EI, CSCD, 2022, No. 4, pp. 942-959 (18 pages)
The dataflow architecture, which is characterized by a lack of redundant unified control logic, has been shown to have an advantage over the control-flow architecture, as it improves computational performance and power efficiency, especially for applications used in high-performance computing (HPC). Importantly, the high computational efficiency of systems using the dataflow architecture is achieved by allowing program kernels to be activated simultaneously. Therefore, a proper acknowledgment mechanism is required to distinguish data that logically belongs to different contexts. Possible solutions include the tagged-token matching mechanism, in which data is sent before acknowledgments are received but retried after rejection, and a handshake mechanism, in which data is only sent after acknowledgments are received. However, these mechanisms are characterized by both inefficient data transfer and increased area cost. Good performance of the dataflow architecture depends on the efficiency of data transfer. In order to optimize the efficiency of data transfer in existing dataflow architectures with a minimal increase in area and power cost, we propose a Look-Ahead Acknowledgment (LAA) mechanism. LAA accelerates the execution flow by speculatively acknowledging ahead without penalties. Our simulation analysis based on a handshake mechanism shows that LAA increases the average utilization of computational units by 23.9%, reduces the average execution time by 17.4%, and increases the average power efficiency of dataflow processors by 22.4%. Crucially, our novel approach results in a relatively small increase in the area and power consumption of the on-chip logic of less than 0.9%. In conclusion, the evaluation results suggest that Look-Ahead Acknowledgment is an effective improvement for data transfer in existing dataflow architectures.
Keywords: dataflow model; control-flow model; high-performance computing application; data transfer; power efficiency
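The contrast between a handshake and a speculative acknowledgment can be sketched as a toy producer/consumer model: under a handshake, every transfer pays a round-trip wait for the consumer's credit; with a look-ahead acknowledgment, the producer sends as soon as its local occupancy count predicts a free slot and stalls only when that prediction fails. The C++ sketch below is a simplified illustration with invented parameters, not the paper's LAA hardware.

```cpp
#include <cstdio>
#include <queue>

// Toy model: a producer sends tokens to a consumer buffer of fixed depth.
// Handshake:  wait one round trip for an explicit credit before each send.
// Look-ahead: send speculatively whenever a free slot is predicted, and
//             stall only when the prediction turns out to be wrong.
struct Consumer {
    std::queue<int> buf;
    static constexpr size_t depth = 4;
    bool accept(int token) {                 // false when the buffer is full
        if (buf.size() >= depth) return false;
        buf.push(token);
        return true;
    }
    void drain_one() { if (!buf.empty()) buf.pop(); }
};

int main() {
    const int tokens = 16, roundtrip = 3;    // cycles per explicit acknowledgment
    // Handshake baseline: every token pays the acknowledgment round trip.
    int handshake_cycles = tokens * (1 + roundtrip);

    // Look-ahead acknowledgment: send back-to-back; pay the round trip
    // only when a speculative send is rejected.
    Consumer c;
    int laa_cycles = 0;
    for (int t = 0; t < tokens; ++t) {
        while (!c.accept(t)) {               // misprediction: buffer actually full
            laa_cycles += roundtrip;         // stall until a real credit returns
            c.drain_one();
        }
        ++laa_cycles;                        // one cycle per successful send
        if (t % 2 == 0) c.drain_one();       // consumer drains at half the rate
    }
    std::printf("handshake: %d cycles, look-ahead: %d cycles\n",
                handshake_cycles, laa_cycles);
    return 0;
}
```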