Journal Articles: 5 articles found
1. Changes and significance of serum metastasis-associated lung adenocarcinoma transcript 1 and thioredoxin-interacting protein levels in patients with cognitive impairment after ischemic stroke (Cited by: 4)
Authors: Tan Shuang (谭双), Li Xianghua (李相华), Liu Chunqin (刘春芹), Li Yongqiu (李永秋). 《中国现代医学杂志》 (China Journal of Modern Medicine), CAS, PKU Core, 2022, Issue 13, pp. 26-31 (6 pages)
Objective: To analyze the changes and significance of serum metastasis-associated lung adenocarcinoma transcript 1 (MALAT1) and thioredoxin-interacting protein (TXNIP) levels in patients with cognitive impairment after ischemic stroke. Methods: This was a prospective study. A total of 150 patients with ischemic stroke treated at Tangshan Gongren Hospital from February 2019 to April 2021 were selected and divided, according to Montreal Cognitive Assessment (MoCA) scores, into a study group (post-stroke cognitive impairment) and a control group (ischemic stroke without cognitive impairment). Serum MALAT1 and TXNIP levels, MoCA scores, and magnetic resonance spectroscopy (MRS) indices were compared between the two groups; Pearson analysis was used to assess the correlations of serum MALAT1 and TXNIP levels with MoCA scores and MRS indices; and ROC curves were plotted to evaluate the value of serum MALAT1 and TXNIP, alone and in combination, for predicting cognitive impairment. Results: Of the 150 ischemic stroke patients, 58 (38.67%) had post-stroke cognitive impairment. Serum MALAT1 and TXNIP levels were higher in the study group than in the control group, while the scores of each MoCA dimension and the total MoCA score were lower (P < 0.05). NAA1/NAA2 was lower and Cho1/Cho2 and Lac1/Cr2 were higher in the study group than in the control group (P < 0.05). In post-stroke patients, serum MALAT1 and TXNIP levels were negatively correlated with the total MoCA score (r = -0.623 and -0.512, both P = 0.000) and NAA1/NAA2 (r = -0.459 and -0.413, both P = 0.000), and positively correlated with Cho1/Cho2 (r = 0.569 and 0.496, both P = 0.000) and Lac1/Cr2 (r = 0.523 and 0.475, both P = 0.000). ROC analysis showed that the AUCs of serum MALAT1, TXNIP, and their combination for predicting post-stroke cognitive impairment were 0.860 (95% CI: 0.796, 0.924), 0.780 (95% CI: 0.703, 0.856), and 0.890 (95% CI: 0.834, 0.945); the sensitivities were 86.7% (95% CI: 0.776, 0.933), 78.3% (95% CI: 0.711, 0.863), and 90.2% (95% CI: 0.839, 0.951); and the specificities were 79.7% (95% CI: 0.702, 0.869), 71.2% (95% CI: 0.612, 0.798), and 85.1% (95% CI: 0.736, 0.915). Conclusion: Serum MALAT1 and TXNIP are highly expressed in patients with cognitive impairment after ischemic stroke; their levels are closely related to cognitive function and MRS indices, and their combination can effectively predict the occurrence of cognitive impairment.
Keywords: ischemic stroke; cognitive impairment; metastasis-associated lung adenocarcinoma transcript 1 (MALAT1); thioredoxin-interacting protein (TXNIP)
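The statistical pipeline described in this abstract (single-marker and combined ROC analysis) can be illustrated with a short, hedged sketch. This is not the authors' analysis code: the data below are synthetic, and the logistic-regression combination is only one common way to form the joint predictor behind a "combined" ROC curve.

```python
# Hedged sketch of single-marker and combined ROC/AUC analysis on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 150
impaired = rng.integers(0, 2, n)                 # 1 = post-stroke cognitive impairment (synthetic labels)
malat1 = rng.normal(1.0 + 0.6 * impaired, 0.3)   # hypothetical serum MALAT1 levels
txnip = rng.normal(1.0 + 0.4 * impaired, 0.3)    # hypothetical serum TXNIP levels

# Single-marker AUCs
print("MALAT1 AUC:", roc_auc_score(impaired, malat1))
print("TXNIP  AUC:", roc_auc_score(impaired, txnip))

# Combined predictor: logistic regression on both markers, then AUC of its probabilities
X = np.column_stack([malat1, txnip])
model = LogisticRegression().fit(X, impaired)
print("Combined AUC:", roc_auc_score(impaired, model.predict_proba(X)[:, 1]))
```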
2. A new fragment re-allocation strategy for NoSQL database systems (Cited by: 3)
Authors: Zhikun CHEN, Shuqiang YANG, Shuang TAN, Li HE, Hong YIN, Ge ZHANG. Frontiers of Computer Science, SCIE, EI, CSCD, 2015, Issue 1, pp. 111-127 (17 pages)
NoSQL databases are known for high scalability, high availability, and high fault tolerance, and are therefore used in many applications. The data partitioning strategy and the fragment allocation strategy directly affect the performance of NoSQL database systems. Large, global databases are partitioned horizontally, vertically, or by a combination of both. In general, the system scatters related fragments across sites as widely as possible to increase the degree of parallelism of operations. In some applications, however, operations are not very complicated, an operation may access more than one fragment, and the fragments accessed by one operation may interact with each other. General allocation strategies therefore increase the system's communication cost when operations execute across sites. To improve the performance of such applications and enable NoSQL database systems to work efficiently, their fragments have to be allocated in a way that reduces the communication cost, i.e., that minimizes the total volume of data transmitted during operation execution across sites. A strategy of clustering fragments based on a hypergraph is proposed, which places fragments that are accessed together in most operations into the same cluster. The method uses a weighted hypergraph to represent the fragment access pattern of operations, and a hypergraph partitioning algorithm is used to cluster the fragments. This reduces the number of sites an operation has to span, and hence the communication cost across sites. Experimental results confirm that the proposed technique effectively contributes to solving the fragment re-allocation problem in a specific application environment of a NoSQL database system.
Keywords: fragment allocation; NoSQL database; hypergraph partition; clustering fragments; fragment correlation
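As a rough illustration of the idea in this abstract (explicitly not the paper's algorithm), the sketch below models each operation as a weighted hyperedge over the fragments it accesses, approximates the hypergraph by clique expansion into an ordinary weighted graph, and uses a stock community-detection routine from networkx as a stand-in for a real hypergraph partitioner (such as the one used in the paper). The access pattern is hypothetical.

```python
# Hedged sketch: cluster fragments that are frequently co-accessed by operations.
import itertools
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical access pattern: operation -> (frequency, fragments accessed together)
operations = {
    "op1": (50, ["f1", "f2"]),
    "op2": (30, ["f1", "f2", "f3"]),
    "op3": (40, ["f4", "f5"]),
    "op4": (10, ["f3", "f4"]),
}

G = nx.Graph()
for freq, frags in operations.values():
    # Clique expansion: each hyperedge becomes pairwise edges weighted by access frequency.
    for a, b in itertools.combinations(frags, 2):
        w = G[a][b]["weight"] + freq if G.has_edge(a, b) else freq
        G.add_edge(a, b, weight=w)

# Fragments co-accessed by many operations land in the same cluster, so they can be
# allocated to the same site to cut cross-site communication.
clusters = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in clusters])
```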
3. NaEPASC: a novel and efficient public auditing scheme for cloud data (Cited by: 2)
Authors: Shuang TAN, Yan JIA. Journal of Zhejiang University-Science C (Computers and Electronics), SCIE, EI, 2014, Issue 9, pp. 794-804 (11 pages)
Cloud computing is deemed the next-generation information technology (IT) platform, in which a data center is crucial for providing a large amount of computing and storage resources for various service applications with guaranteed quality. However, cloud users no longer possess their data in a local storage infrastructure, which makes auditing the integrity of outsourced data a challenging problem, especially for users with constrained computing resources. How to help users verify the integrity of their outsourced data has therefore become a key issue. Public verification is a critical technique for solving this problem: users can resort to a third-party auditor (TPA) to check the integrity of outsourced data. Moreover, an identity-based (ID-based) public key cryptosystem offers efficient key management compared with the certificate-based public key setting. In this paper, we combine ID-based aggregate signatures with public verification to construct a protocol for provable data integrity. With the proposed mechanism, the TPA not only verifies the integrity of outsourced data on behalf of cloud users, but also alleviates the burden of checking tasks with the help of users' identities. Compared to previous work, the proposed scheme greatly reduces the time for auditing a single task on the TPA side. Security analysis and performance evaluation results show the high efficiency and security of the proposed scheme.
Keywords: cloud storage; public verification; identity-based aggregate signature
4. Topology awareness algorithm for virtual network mapping (Cited by: 1)
Authors: Xiao-ling LI, Huai-min WANG, Chang-guo GUO, Bo DING, Xiao-yong LI, Wen-qi BI, Shuang TAN. Journal of Zhejiang University-Science C (Computers and Electronics), SCIE, EI, 2012, Issue 3, pp. 178-186 (9 pages)
Network virtualization is recognized as an effective way to overcome the ossification of the Internet. However, the virtual network mapping problem (VNMP) is a critical challenge, focusing on how to map virtual networks onto the substrate network while using infrastructure resources efficiently. The problem can be divided into two phases: a node mapping phase and a link mapping phase. In the node mapping phase, existing algorithms usually map virtual nodes with a purely greedy strategy, without considering the topology among the virtual nodes, which results in overly long substrate paths (with multiple hops). To address this problem, we propose a topology-aware mapping algorithm that takes the topology among the virtual nodes into account. In the link mapping phase, the new algorithm adopts the k-shortest-path algorithm. Simulation results show that the new algorithm greatly increases the long-term average revenue, the acceptance ratio, and the long-term revenue-to-cost ratio (R/C).
Keywords: network virtualization; ossification; virtual network (VN) mapping; substrate network (SN); topology awareness; acceptance ratio
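For the link mapping phase mentioned in this abstract, a minimal, hedged sketch of k-shortest-path routing over a substrate graph is shown below. It is illustrative only, not the authors' implementation: it covers just the link mapping step for an already-fixed node mapping, and all graph data, capacities, and names are hypothetical.

```python
# Hedged sketch: map virtual links onto a substrate network via k-shortest-path search.
import itertools
import networkx as nx

substrate = nx.Graph()
substrate.add_weighted_edges_from(
    [("A", "B", 10), ("B", "C", 10), ("A", "C", 4), ("C", "D", 8), ("B", "D", 6)],
    weight="bandwidth",  # residual bandwidth on each substrate link (hypothetical)
)

node_map = {"v1": "A", "v2": "D"}       # assumed result of the node mapping phase
virtual_links = [("v1", "v2", 5)]       # (virtual src, virtual dst, demanded bandwidth)

def map_link(g, src, dst, demand, k=3):
    """Try the k shortest substrate paths and return the first one with enough bandwidth."""
    for path in itertools.islice(nx.shortest_simple_paths(g, src, dst), k):
        if all(g[u][v]["bandwidth"] >= demand for u, v in zip(path, path[1:])):
            for u, v in zip(path, path[1:]):
                g[u][v]["bandwidth"] -= demand   # reserve bandwidth along the chosen path
            return path
    return None  # rejected: no feasible path among the k candidates

for vs, vd, bw in virtual_links:
    print(vs, "->", vd, "mapped to", map_link(substrate, node_map[vs], node_map[vd], bw))
```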
5. A high-resolution gas-kinetic scheme with minimized dispersion and controllable dissipation reconstruction
Authors: Shuang TAN, QiBing LI. Science China (Physics, Mechanics & Astronomy), SCIE, EI, CAS, CSCD, 2017, Issue 11, pp. 50-67 (18 pages)
In order to simulate multiscale problems such as turbulent flows effectively, a high-order accurate reconstruction based on minimized dispersion and controllable dissipation (MDCD) is implemented in the second-order accurate gas-kinetic scheme (GKS) to improve accuracy and resolution. MDCD is first extended to non-uniform grids by modifying the dissipation and dispersion coefficients derived for uniform grids according to the local stretch ratio, which yields remarkable improvements in accuracy and resolution on general grids. A new scheme, MDCD-GKS, is then constructed with the help of MDCD reconstruction, applied not only to the conservative variables but also to their gradients. MDCD-GKS shows good accuracy and efficiency in typical numerical tests. MDCD-GKS is also coupled with the improved delayed detached-eddy simulation (IDDES) hybrid model and applied to the fine simulation of turbulent flow around a cylinder; the prediction agrees well with experiments even on a relatively coarse grid. The high accuracy and resolution of the developed GKS guarantee its high efficiency in practical applications.
Keywords: GKS; MDCD; high-resolution; non-uniform grid; turbulence simulation