Journal Articles
5,272 articles found
1. A Partitioning Algorithm for Shared Cache in Multi-core Processors
Authors: 吕海玉, 罗广, 朱嘉炜, 张凤登. 《电子科技》, 2024, Issue 9, pp. 27-33 (7 pages)
To address performance optimization of multi-core processors, this paper studies management strategies for the shared Cache of multi-core processors and proposes MT-FTP (Memory Time based Fair and Throughput Partitioning), a shared-Cache partitioning algorithm based on cache-time fairness and throughput. A mathematical model is built around two evaluation metrics, fairness and throughput, and the partitioning workflow of the algorithm is analyzed. Simulation results show that MT-FTP performs well in system throughput: its average IPC (Instructions Per Cycle) is 1.3% higher than that of the UCP (Utility-based Cache Partitioning) algorithm and 11.6% higher than that of the LRU (Least Recently Used) algorithm. The average system fairness of MT-FTP is 17% higher than that of LRU and 16.5% higher than that of UCP. The algorithm achieves fairness in shared-Cache partitioning while preserving system throughput.
Keywords: chip multiprocessor; memory wall; partitioning; fairness; throughput; shared Cache; cache time; integrated computer
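For illustration only (not from the paper): a minimal Python sketch of a greedy way-partitioning loop that trades off marginal miss reduction against a crude fairness term, in the general spirit of fairness/throughput-aware schemes such as MT-FTP. The scoring formula, the `alpha` weight, and the miss curves are assumptions for the example, not the paper's formulas.

```python
# Illustrative sketch: greedy way allocation balancing throughput gain and fairness.
def partition_ways(num_ways, miss_curves, alpha=0.5):
    """Assign cache ways to cores one at a time.

    miss_curves[c][w] = estimated misses of core c when given w ways.
    alpha weighs throughput gain against fairness (hypothetical weighting).
    """
    cores = range(len(miss_curves))
    alloc = {c: 1 for c in cores}              # every core keeps at least one way
    remaining = num_ways - len(miss_curves)

    for _ in range(remaining):
        def score(c):
            gain = miss_curves[c][alloc[c]] - miss_curves[c][alloc[c] + 1]  # marginal miss reduction
            starvation = 1.0 / alloc[c]                                     # crude fairness proxy
            return alpha * gain + (1 - alpha) * starvation
        best = max(cores, key=score)
        alloc[best] += 1
    return alloc

# Example: two cores sharing 8 ways, with made-up miss curves.
curves = [[100, 60, 45, 40, 38, 37, 36, 36, 36],
          [100, 90, 70, 40, 20, 15, 12, 10, 9]]
print(partition_ways(8, curves))
```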
2. DFS-Cache: A Memory-Efficient Persistent Client-Side Cache for Distributed File Systems
Authors: 倪瑞轩, 蔡淼, 叶保留. 《计算机应用》 (CSCD, PKU Core), 2024, Issue 4, pp. 1172-1179 (8 pages)
To reduce cache defragmentation overhead and improve the cache hit ratio under data-intensive workflows, a persistent distributed file system client-side cache, DFS-Cache (Distributed File System Cache), is proposed. DFS-Cache is designed and implemented on non-volatile memory (NVM); it guarantees data persistence and crash consistency and greatly reduces cold-start time. DFS-Cache comprises a cache defragmentation mechanism based on virtual-memory remapping and a cache space management strategy based on time-to-live (TTL). The former exploits the fact that NVM can be addressed directly by the memory controller and dynamically modifies the mapping between virtual and physical addresses, achieving zero-copy defragmentation; the latter is a hot/cold-separated group management strategy that leverages the remapping-based defragmentation mechanism to improve the efficiency of cache space management. In experiments on real Intel Optane persistent memory, compared with the commercial distributed file systems MooseFS and GlusterFS under standard benchmarks such as Fio and Filebench, DFS-Cache improves system throughput by up to 5.73x and 1.89x, respectively.
Keywords: non-volatile memory; distributed file system; client-side cache; cache defragmentation; hot/cold data grouping; cache design
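For illustration only (not from the paper): a small sketch of TTL-driven hot/cold grouping for cache entries, loosely echoing DFS-Cache's space-management idea. The class name, thresholds, and in-memory dictionaries are hypothetical stand-ins; the real system operates on NVM with address remapping.

```python
# Illustrative sketch: hot/cold grouping with TTL-based demotion and eviction.
import time

class TTLGroupedCache:
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.hot = {}    # key -> (value, last_access)
        self.cold = {}

    def get(self, key):
        now = time.time()
        if key in self.hot:
            value, _ = self.hot[key]
            self.hot[key] = (value, now)
            return value
        if key in self.cold:                 # promote on re-access
            value, _ = self.cold.pop(key)
            self.hot[key] = (value, now)
            return value
        return None

    def put(self, key, value):
        self.hot[key] = (value, time.time())

    def demote_expired(self):
        """Move entries idle longer than the TTL to the cold group; evict stale cold entries."""
        now = time.time()
        for key, (value, last) in list(self.hot.items()):
            if now - last > self.ttl:
                self.cold[key] = self.hot.pop(key)
        for key, (value, last) in list(self.cold.items()):
            if now - last > 2 * self.ttl:
                del self.cold[key]
```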
3. Optimized Design of the Level-2 Cache Controller in an R-DSP
Authors: 谭露露, 谭勋琼, 白创. 《电子与封装》, 2024, Issue 7, pp. 63-68 (6 pages)
Given the important role of the level-2 Cache controller (L2) in improving the memory-access efficiency and overall performance of the R digital signal processor (R-DSP), and considering the memory-safety maintenance and multi-request access-arbitration problems involved in L2, the existing L2 of the R-DSP is optimized. First, a multi-bank storage organization is adopted to improve memory-access efficiency; second, level-1 Cache controller requests and external-memory requests are processed in parallel to shorten the request-processing cycle; finally, bandwidth management and storage protection functions are added to arbitrate access requests reasonably and maintain storage safety. Experimental results show that, compared with the traditional design, the new design achieves bandwidth-managed access arbitration while protecting the level-2 storage. Compared with the existing L2 in the R-DSP, the maximum number of access requests that a storage bank of the new design can serve in a single cycle is doubled, and the average number of clock cycles for processing level-1 requests and external-memory requests is reduced by 25% and 19.6%, respectively.
Keywords: DSP; level-2 Cache; storage structure; parallel processing; storage protection; bandwidth management
4. Design and Verification of a HyperRAM Controller with Cache Acceleration
Authors: 邹敏, 鲁澳宇, 邹望辉, 喻华. 《现代电子技术》 (PKU Core), 2024, Issue 6, pp. 91-96 (6 pages)
To meet wearable devices' demands for storage with high performance, small size, and low power consumption, a scalable high-performance HyperRAM controller is implemented on an FPGA, and a Cache acceleration design is introduced to raise the hit ratio of frequently accessed data and optimize memory access patterns, achieving faster data transfer and better system performance. Verification with the UVM methodology and on the FPGA shows that, compared with a plain HyperRAM controller, the Cache-equipped HyperRAM controller improves performance by 61% when reading and writing consecutive addresses and exhibits good reliability and effectiveness, providing an efficient and flexible memory solution for embedded systems.
Keywords: HyperRAM controller; Cache; wearable devices; memory; UVM verification methodology; FPGA
5. Quantitative Study of Defenses Against Cache Side-Channel Attacks
Authors: 王占鹏, 朱子元, 王立敏. 《信息安全学报》 (CSCD), 2024, Issue 4, pp. 107-124 (18 pages)
Chip security protection concerns the information security of nations, enterprises, and individuals, and related research has long been a hot topic in computer security. The on-chip cache plays an important role in chip performance and can effectively improve the core's access efficiency. Traditional cache designs, however, did not fully consider security: side-channel attacks pose a serious threat to the Cache and can steal sensitive information stored in memory, such as encryption keys. When an attacker uses side-channel techniques to steal a user's private data or cryptographic keys, the running state of the system-on-chip is not changed, which makes it difficult for the computer system to detect that it is under attack. Compared with side-channel attacks based on electromagnetic signals or power analysis, side-channel attacks based on storage sharing can be carried out with software measurements alone and therefore pose a greater threat to chip security. Many side-channel attacks and defenses exist, but a complete security-measurement methodology at the system-architecture level for effectively evaluating Cache security is still lacking. This paper models Cache side-channel attacks and defenses and proposes a quantitative method for studying Cache security. First, the CVSS vulnerability scoring model is used to score Cache side-channel attacks quantitatively. Then, Bayes' formula is used to build a model of the relationship between side-channel attacks and defenses. Finally, a graph model is used to capture the mechanism of Cache side-channel attacks, the success rate of different threats under a given defense architecture is computed, and the score of each defense method is obtained by combining it with the CVSS defense score. By modeling the mechanisms of Cache side-channel attacks and evaluating and exploring attacks and defenses, this paper provides theoretical support for hardware security practitioners.
Keywords: Cache side channel; CVSS; Bayesian model; security quantification; security architecture
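For illustration only (not from the paper): a generic Bayes'-rule calculation of how likely a Cache side-channel attack is, given an observed anomaly such as an unusual miss pattern. The prior could come from a CVSS-style score, as the abstract suggests, but every number below is made up, and this is not the paper's attack/defense relationship model.

```python
# Illustrative sketch: Bayes' rule applied to a side-channel anomaly observation.
def bayes_posterior(prior_attack, p_anomaly_given_attack, p_anomaly_given_benign):
    """P(attack | anomaly observed), by Bayes' rule."""
    num = p_anomaly_given_attack * prior_attack
    den = num + p_anomaly_given_benign * (1.0 - prior_attack)
    return num / den

# Hypothetical numbers: a CVSS-derived prior of 0.2, and an anomaly that appears
# in 90% of attack traces but only 5% of benign traces.
print(bayes_posterior(0.2, 0.9, 0.05))   # ~0.82: the anomaly strongly suggests an attack
```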
6. A Service Invocation Method Based on Cache Optimization
Authors: 杨国胜, 杨毅, 王海, 段锴. 《数字技术与应用》, 2024, Issue 4, pp. 60-63 (4 pages)
A centralized service gateway typically uses shared memory for the local production and consumption of service instances and governance parameters, decoupling business processing from service-discovery logic and improving system stability; however, frequent shared-memory operations often lead to inefficiency in system resource utilization and request-processing latency. By introducing a caching mechanism, a Cache optimized for service invocation is implemented and used inside the routing component of the service gateway: requests for hot data read structured information directly from the Cache, avoiding shared-memory operations and the encoding/decoding of storage blocks. This makes effective use of cache space, speeds up data access, reduces resource contention in shared-memory operations, and improves system concurrency.
Keywords: shared memory; service gateway; caching mechanism; service instance; Cache; structured information; hot data; cache space
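For illustration only (not from the paper): a sketch of an in-process routing cache placed in front of a shared-memory registry, matching the high-level idea in the abstract. The `SharedMemoryRegistry` class, its decode cost, and the TTL are hypothetical stand-ins for the gateway's actual shared-memory store.

```python
# Illustrative sketch: serve hot routing lookups from a decoded in-process cache.
import time

class SharedMemoryRegistry:
    """Stand-in for the gateway's shared-memory store of service instances."""
    def __init__(self, table):
        self._table = table
    def read_and_decode(self, service):
        time.sleep(0.001)            # simulate block read + deserialization cost
        return dict(self._table[service])

class RouteCache:
    def __init__(self, registry, ttl=1.0):
        self.registry = registry
        self.ttl = ttl
        self._entries = {}           # service -> (decoded instances, fetch time)

    def lookup(self, service):
        hit = self._entries.get(service)
        if hit and time.time() - hit[1] < self.ttl:
            return hit[0]            # hot path: structured data, no decode
        instances = self.registry.read_and_decode(service)
        self._entries[service] = (instances, time.time())
        return instances

registry = SharedMemoryRegistry({"user-svc": {"endpoints": ["10.0.0.1:8080", "10.0.0.2:8080"]}})
cache = RouteCache(registry)
print(cache.lookup("user-svc"))      # miss: decodes from "shared memory"
print(cache.lookup("user-svc"))      # hit: served from the in-process cache
```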
7. Efficient cache replacement framework based on access hotness for spacecraft processors
Authors: GAO Xin, NIAN Jiawei, LIU Hongjin, YANG Mengfei. 《中国空间科学技术(中英文)》 (CSCD, PKU Core), 2024, Issue 2, pp. 74-88 (15 pages)
A notable portion of cachelines in real-world workloads exhibits inner non-uniform access behaviors. However, modern cache management rarely considers this fine-grained feature, which impacts the effective cache capacity of contemporary high-performance spacecraft processors. To harness these non-uniform access behaviors, an efficient cache replacement framework featuring an auxiliary cache specifically designed to retain evicted hot data was proposed. This framework reconstructs the cache replacement policy, facilitating data migration between the main cache and the auxiliary cache. Unlike traditional cacheline-granularity policies, the approach excels at identifying and evicting infrequently used data, thereby optimizing cache utilization. The evaluation shows impressive performance improvement, especially on workloads with irregular access patterns. Benefiting from fine granularity, the proposal achieves superior storage efficiency compared with commonly used cache management schemes, providing a potential optimization opportunity for modern resource-constrained processors, such as spacecraft processors. Furthermore, the framework complements existing modern cache replacement policies and can be seamlessly integrated with minimal modifications, enhancing their overall efficacy.
Keywords: spacecraft processors; cache management; replacement policy; storage efficiency; memory hierarchy; microarchitecture
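For illustration only (not from the paper): a toy main cache backed by a small auxiliary cache that retains evicted lines whose access count marks them as hot, loosely echoing the framework's idea of rescuing evicted hot data. Sizes, the hotness threshold, and the data structures are assumptions for the example.

```python
# Illustrative sketch: an LRU main cache with an auxiliary cache for evicted hot lines.
from collections import OrderedDict

class AuxiliaryCacheHierarchy:
    def __init__(self, main_size=4, aux_size=2, hot_threshold=2):
        self.main = OrderedDict()       # address -> access count (LRU order)
        self.aux = OrderedDict()        # hot lines that were evicted from main
        self.main_size = main_size
        self.aux_size = aux_size
        self.hot_threshold = hot_threshold

    def access(self, addr):
        if addr in self.main:                       # main-cache hit
            self.main[addr] += 1
            self.main.move_to_end(addr)
            return "main hit"
        if addr in self.aux:                        # rescued from the auxiliary cache
            count = self.aux.pop(addr)
            self._insert_main(addr, count + 1)
            return "aux hit"
        self._insert_main(addr, 1)                  # miss: fetch from memory
        return "miss"

    def _insert_main(self, addr, count):
        if len(self.main) >= self.main_size:
            victim, victim_count = self.main.popitem(last=False)   # evict LRU line
            if victim_count >= self.hot_threshold:  # keep hot victims nearby
                if len(self.aux) >= self.aux_size:
                    self.aux.popitem(last=False)
                self.aux[victim] = victim_count
        self.main[addr] = count
```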
8. Graph4Cache: A Graph Neural Network Model for Cache Prefetching
Authors: 尚晶, 武智晖, 肖智文, 张逸飞. 《计算机研究与发展》 (EI, CSCD, PKU Core), 2024, Issue 8, pp. 1945-1956 (12 pages)
Most computing systems use caches to reduce data access time, speed up data processing, and balance service load. The key to cache management is determining the right data to load into or discard from the cache, and the right moment for replacement, which is crucial for improving the cache hit ratio. Existing caching schemes face two problems: in real-time, online caching scenarios it is hard to capture the popularity of the data users access, and the complex higher-order information among data access sequences is ignored. Graph4Cache, a GNN-based cache prefetching network, is proposed. Each access sequence is modeled as a directed graph (ASGraph), and a virtual node is introduced to aggregate the information of all nodes in the graph and represent the whole sequence. A cross-sequence undirected graph (CSGraph) is then built from the virtual nodes of the ASGraphs to learn cross-sequence features, which greatly enriches the limited item-transition patterns within a single sequence. By fusing the information of these two graph structures, higher-order correlations among sequences are learned and rich user intent is captured. Experimental results on several public datasets demonstrate the effectiveness of this method: Graph4Cache outperforms existing cache prediction algorithms on both P@20 and MRR@20.
Keywords: graph neural network; cache prefetching; access sequence graph; cross-sequence graph; cache prediction
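For illustration only (not from the paper): a sketch of turning one access sequence into a directed graph with an added virtual node connected to every item, mirroring the ASGraph construction described in the abstract. The edge-list representation is a simplification chosen for the example; no GNN layers are shown.

```python
# Illustrative sketch: build a directed access-sequence graph with a virtual node.
def build_asgraph(sequence, virtual_node="VIRTUAL"):
    nodes = list(dict.fromkeys(sequence))          # unique items, order preserved
    index = {item: i for i, item in enumerate(nodes)}
    edges = set()
    for src, dst in zip(sequence, sequence[1:]):   # directed item-transition edges
        edges.add((index[src], index[dst]))
    v = len(nodes)                                 # the virtual node aggregates all items
    nodes.append(virtual_node)
    for i in range(v):
        edges.add((i, v))
        edges.add((v, i))
    return nodes, sorted(edges)

nodes, edges = build_asgraph(["A", "B", "A", "C", "B"])
print(nodes)   # ['A', 'B', 'C', 'VIRTUAL']
print(edges)   # transition edges plus bidirectional links to the virtual node
```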
9. Data Storage and Query Optimization in the Caché Database
Authors: 牛彩云, 王建林, 光奇, 樊睿. 《信息技术与信息化》, 2024, Issue 1, pp. 17-21 (5 pages)
The multidimensional data model of the Caché database can store rich data and, when handling complex medical data, reduces processing steps such as table joins, allowing multidimensional arrays to access data faster. Compared with mainstream relational databases such as Oracle and SQL Server, Caché differs mainly in its storage structure: it stores data primarily in the form of Globals, and applications are developed in the M language. This paper first introduces how data is stored in the Caché database, then presents several ways of querying data in Caché and their application scenarios in the hospital HIS system, and finally summarizes several methods of SQL optimization in Caché. The results show that the Caché database is more flexible and suits a variety of application scenarios, and that query efficiency improves many times over after the optimized query schemes are adopted.
Keywords: Caché database; multidimensional data model; query optimization; SQL statements; data storage
10. Design and Implementation of Cache Coherence for Multi-core Processors with Private L2 Caches
Authors: 马良骥, 杨靓, 肖建青, 娄冕, 赵翠华. 《微电子学与计算机》, 2023, Issue 10, pp. 102-109 (8 pages)
In recent years, private L2 Caches have been the mainstream architecture for high-performance multi-core processors, but maintaining Cache coherence in this architecture requires multiple memory accesses, increasing system overhead. This paper implements, on the PowerPC instruction architecture, a multi-core cache coherence design that combines a private-Cache state machine with an on-chip bus snooping mechanism, allowing processors to exchange data directly through an intervention interface. The multi-core cache structure is designed and implemented in the hardware description language Verilog HDL. Simulation results show that, when enforcing cache coherence, this structure with an intervention path can save up to 87.06% of the time overhead compared with the traditional memory-access approach, effectively improving multi-core processor performance. Finally, board-level tests of the fabricated chip are consistent with the simulation results.
Keywords: multi-core coherence; private L2 Cache; PLB bus; intervention interface
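For illustration only (not from the paper): a tiny MESI-style coherence state machine for one cache line, to make the "private-cache state machine plus bus snooping" idea concrete. The transition table is the textbook MESI protocol, not the specific PowerPC/PLB design described in the paper.

```python
# Illustrative sketch: textbook MESI transitions driven by local and snooped events.
MESI_TRANSITIONS = {
    # (current state, event) -> next state
    ("I", "local_read"):   "S",   # read miss, line fetched (possibly shared)
    ("I", "local_write"):  "M",
    ("S", "local_write"):  "M",   # upgrade: invalidate other sharers
    ("S", "snoop_write"):  "I",
    ("E", "local_write"):  "M",
    ("E", "snoop_read"):   "S",
    ("M", "snoop_read"):   "S",   # supply data (intervention) and downgrade
    ("M", "snoop_write"):  "I",
}

def next_state(state, event):
    return MESI_TRANSITIONS.get((state, event), state)   # unchanged if no rule applies

# Example: core 0 writes a line that core 1 currently holds in state S.
core0, core1 = "I", "S"
core0 = next_state(core0, "local_write")   # -> M
core1 = next_state(core1, "snoop_write")   # -> I (invalidated by the snooped write)
print(core0, core1)
```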
11. A GPGPU Cache Bypass System for 2D and 3D Convolutions
Authors: 贾世伟, 张玉明, 秦翔, 孙成璐, 田泽. 《西安电子科技大学学报》 (EI, CAS, CSCD, PKU Core), 2023, Issue 2, pp. 92-100 (9 pages)
As the core acceleration platform for convolutional neural networks, the general-purpose GPU's performance on 2D and 3D convolutions determines the effective application of neural networks in real-time object recognition and detection. However, limited by the capabilities of its native cache system, the current GPGPU architecture cannot accelerate 2D and 3D convolutions efficiently. To address this problem, an L1 data cache (L1D cache) dynamic bypass design is first proposed. The scheme defines a set of data structures that dynamically reflect the cache-access characteristics of instructions, and on top of them defines an access-feature record table that records the execution state of different memory-access instructions when they request the cache. Second, a thread-block-prioritizing warp scheduling policy is adopted to accelerate the sampling of memory-access states. Finally, bypass decisions for the L1D cache are derived from the recorded access states for different PC values, and some low-locality data requests dynamically bypass the L1D cache. The L1D cache space is thus reserved for high-locality data, memory stall cycles during 2D and 3D convolution are reduced, and the memory-access efficiency of 2D and 3D convolutions on the GPGPU is improved. Experimental results show performance improvements of about 2.16% and 19.79% over the original architecture for 2D and 3D convolutions, respectively, demonstrating the effectiveness and practicality of the design.
Keywords: convolution; general-purpose graphics processing unit; memory system; cache bypass
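For illustration only (not from the paper): a per-PC access-feature table that decides whether a load should bypass the L1 data cache, echoing the abstract's idea of routing low-locality requests around the cache. The sampling window, threshold, and fields are assumptions for the example.

```python
# Illustrative sketch: per-PC hit-rate sampling drives an L1D bypass decision.
class BypassTable:
    def __init__(self, sample_window=64, reuse_threshold=0.2):
        self.stats = {}                      # pc -> {"accesses": n, "hits": n}
        self.sample_window = sample_window
        self.reuse_threshold = reuse_threshold

    def record(self, pc, was_hit):
        entry = self.stats.setdefault(pc, {"accesses": 0, "hits": 0})
        entry["accesses"] += 1
        entry["hits"] += int(was_hit)

    def should_bypass(self, pc):
        entry = self.stats.get(pc)
        if not entry or entry["accesses"] < self.sample_window:
            return False                     # not enough samples: keep caching
        hit_rate = entry["hits"] / entry["accesses"]
        return hit_rate < self.reuse_threshold   # low locality -> bypass the L1D

table = BypassTable(sample_window=4)
for hit in [False, False, False, False]:     # a streaming, low-reuse load
    table.record(0x400, hit)
print(table.should_bypass(0x400))            # True: send it around the L1D cache
```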
12. Cache in fog computing design, concepts, contributions, and security issues in machine learning prospective
Authors: Muhammad Ali Naeem, Yousaf Bin Zikria, Rashid Ali, Usman Tariq, Yahui Meng, Ali Kashif Bashir. 《Digital Communications and Networks》 (SCIE, CSCD), 2023, Issue 5, pp. 1033-1052 (20 pages)
The massive growth of diversified smart devices and continuous data generation poses a challenge to communication architectures. To deal with this problem, communication networks consider fog computing as one of the promising technologies that can improve overall communication performance. It brings on-demand services proximate to the end devices and delivers the requested data in a short time. Fog computing faces several issues such as latency, bandwidth, and link utilization due to limited resources and the high processing demands of end devices. To this end, fog caching plays an imperative role in addressing data dissemination issues. This study provides a comprehensive discussion of fog computing, the Internet of Things (IoT), and the critical issues related to data security and dissemination in fog computing. Moreover, we examine fog-based caching schemes and contribute to addressing the existing issues of fog computing. Besides, this paper presents a number of caching schemes with their contributions, benefits, and challenges to overcome the problems and limitations of fog computing. We also identify machine learning-based approaches for cache security and management in fog computing, as well as several prospective future research directions in caching, fog computing, and machine learning.
Keywords: Internet of Things; cloud computing; fog computing; caching; latency
13. Request pattern change-based cache pollution attack detection and defense in edge computing
Authors: Junwei Wang, Xianglin Wei, Jianhua Fan, Qiang Duan, Jianwei Liu, Yangang Wang. 《Digital Communications and Networks》 (SCIE, CSCD), 2023, Issue 5, pp. 1212-1220 (9 pages)
Through caching popular contents at the network edge, wireless edge caching can greatly reduce both the content request latency at mobile devices and the traffic burden at the core network. However, popularity-based caching strategies are vulnerable to Cache Pollution Attacks (CPAs) due to the weak security protection at both edge nodes and mobile devices. In CPAs, by initiating a large number of requests for unpopular contents, malicious users can pollute the edge caching space and degrade the caching efficiency. This paper first integrates the dynamic nature of content requests and mobile devices into the edge caching framework, and introduces an eavesdropping-based CPA strategy. Then, an edge caching mechanism, which contains a Request Pattern Change-based Cache Pollution Detection (RPC2PD) algorithm and an Attack-aware Cache Defense (ACD) algorithm, is proposed to defend against CPAs. Simulation results show that the proposed mechanism can effectively suppress the effects of CPAs on the caching performance and improve the cache hit ratio.
Keywords: mobile edge computing; cache pollution attack; flash crowd
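For illustration only (not from the paper): a sketch that flags a possible cache pollution attack when the current request-popularity distribution drifts too far from a learned baseline, which is the general idea behind request-pattern-change detection. The divergence measure, the threshold, and the data are assumptions; this is not the RPC2PD algorithm itself.

```python
# Illustrative sketch: KL-divergence drift check between baseline and recent requests.
import math
from collections import Counter

def distribution(requests, catalog):
    counts = Counter(requests)
    total = max(len(requests), 1)
    return [(counts[c] + 1e-9) / total for c in catalog]   # smoothed frequencies

def kl_divergence(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def pollution_suspected(baseline_requests, recent_requests, catalog, threshold=0.5):
    p = distribution(recent_requests, catalog)
    q = distribution(baseline_requests, catalog)
    return kl_divergence(p, q) > threshold

catalog = ["A", "B", "C", "D", "E"]
baseline = ["A"] * 50 + ["B"] * 30 + ["C"] * 20            # normal, skewed popularity
attack   = ["D"] * 45 + ["E"] * 45 + ["A"] * 10            # burst of unpopular items
print(pollution_suspected(baseline, attack, catalog))       # True under these numbers
```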
14. Power Information System Database Cache Model Based on Deep Machine Learning
Authors: Manjiang Xing. 《Intelligent Automation & Soft Computing》 (SCIE), 2023, Issue 7, pp. 1081-1090 (10 pages)
At present, the database cache model of the power information system suffers from problems such as slow running speed and a low database hit rate. To this end, this paper proposes a database cache model for power information systems based on deep machine learning. The caching model includes program caching, Structured Query Language (SQL) preprocessing, and core caching modules. Statement efficiency is improved by adjusting operations such as multi-table joins and keyword replacement in the SQL optimizer. In the core caching module, predictive models are built with boosted regression trees: machine learning algorithms generate a series of regression tree models, the resource occupancy of the power information system is analyzed to dynamically adjust the voting selection of the regression trees, and the voting threshold of the prediction model is adjusted dynamically as well; the cache model is then re-initialized in the same manner. Experimental results show that the model achieves a good cache hit rate and caching efficiency and can improve the data caching performance of the power information system. It maintains a high hit rate and short delay time even under different memory configurations, while occupying little space and CPU during actual operation, enabling the power information system to run efficiently and quickly.
Keywords: deep machine learning; power information system; database; cache model
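For illustration only (not from the paper): a sketch that uses gradient-boosted regression trees to score how likely a query result is to be re-used and caches the high scorers. This is a generic stand-in for the paper's boosted-regression-tree cache model; the features, synthetic data, and threshold are all made up.

```python
# Illustrative sketch: boosted regression trees predict a re-use score for caching.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Hypothetical features per query: [recent access count, rows returned,
# seconds since last access]; label: observed re-access rate in the next hour.
X = rng.random((200, 3)) * [50, 1000, 3600]
y = 0.02 * X[:, 0] - 0.0001 * X[:, 2] + rng.normal(0, 0.05, 200)

model = GradientBoostingRegressor(n_estimators=100, max_depth=3)
model.fit(X, y)

def should_cache(features, threshold=0.3):
    """Cache the result if the predicted re-use score clears the threshold."""
    return model.predict(np.asarray(features).reshape(1, -1))[0] > threshold

print(should_cache([40, 120, 60]))      # frequently re-used, recent -> likely True
print(should_cache([1, 900, 3500]))     # rarely touched, stale -> likely False
```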
15. Shared Cache Based on Content Addressable Memory in a Multi-Core Architecture
Authors: Allam Abumwais, Mahmoud Obaid. 《Computers, Materials & Continua》 (SCIE, EI), 2023, Issue 3, pp. 4951-4963 (13 pages)
Modern shared-memory multi-core processors typically have shared Level 2 (L2) or Level 3 (L3) caches. Cache bottlenecks and replacement strategies are the main problems of such architectures, where multiple cores try to access the shared cache simultaneously. The main problem in improving memory performance is the shared cache architecture and cache replacement. This paper documents the implementation of a Dual-Port Content Addressable Memory (DPCAM) and a modified Near-Far Access Replacement Algorithm (NFRA), which were previously proposed as a shared L2 cache layer in a multi-core processor. Standard Performance Evaluation Corporation (SPEC) CPU 2006 benchmark workloads are used to evaluate the benefit of the shared L2 cache layer. Results show improved performance of the multi-core processor's DPCAM and NFRA algorithms, corresponding to a higher number of concurrent accesses to shared memory. The new architecture significantly increases system throughput and records performance improvements of up to 8.7% on various types of SPEC 2006 benchmarks. The miss rate is also improved by about 13%, with some exceptions in the sphinx3 and bzip2 benchmarks. These results could open a new window for solving the long-standing problems with shared caches in multi-core processors.
Keywords: multi-core processor; shared cache; content addressable memory; dual-port CAM; replacement algorithm; benchmark program
16. Joint User Association and Caching Placement for Cache-Enabling UAV Networks
Authors: Tiankui Zhang, Chao Chen, Dingcheng Yang. 《China Communications》 (SCIE, CSCD), 2023, Issue 6, pp. 291-309 (19 pages)
Cache-enabling unmanned aerial vehicles (UAVs) are considered for storing popular contents and providing downlink data offloading in cellular networks. In this context, we formulate a joint optimization problem of user association, caching placement, and backhaul bandwidth allocation for minimizing content acquisition delay with consideration of the UAVs' energy constraint. We decompose the formulated problem into two subproblems: i) user association and caching placement, and ii) backhaul bandwidth allocation. We first obtain the optimal bandwidth allocation with given user association and caching placement by the Lagrangian multiplier approach. After that, embedding the backhaul bandwidth allocation algorithm, we solve the user association and caching placement problem by a three-dimensional (3D) matching method, decomposing it into two two-dimensional (2D) matching problems and developing low-complexity algorithms. The proposed scheme converges and exhibits low computational complexity. Simulation results demonstrate that the proposed cache-enabling UAV framework outperforms conventional UAV-assisted cellular networks in terms of content acquisition delay, and the proposed scheme achieves significantly lower content acquisition delay than the other two benchmark schemes.
Keywords: edge caching; unmanned aerial vehicles; user association; three-dimensional (3D) matching
17. A Time Pattern-Based Intelligent Cache Optimization Policy on Korea Advanced Research Network
Authors: Waleed Akbar, Afaq Muhammad, Wang-Cheol Song. 《Intelligent Automation & Soft Computing》 (SCIE), 2023, Issue 6, pp. 3743-3759 (17 pages)
Data is growing quickly due to a significant increase in social media applications. Today, billions of people use an enormous amount of data to access the Internet. The backbone network experiences a substantial load as a result of the increase in users. Users in the same region or company frequently ask for similar material, especially on social media platforms. A subsequent request for the same content can be satisfied from the edge if it is stored in proximity to the user. Applications that require relatively low latency can use Content Delivery Network (CDN) technology to meet their requirements. An edge and the data center constitute the CDN architecture. To fulfill requests from the edge and minimize the impact on the network, the requested content can be buffered closer to the user device. Which content should be kept on the edge is the primary concern. The cache policy has been optimized using various conventional and unconventional methods, but they have yet to include the timestamp alongside a video request. The 24-hour content request pattern was obtained from publicly available datasets. The popularity of a video is influenced by the time of day, as shown by a time-based video profile. We present a cache optimization method based on a time-based pattern of requests. The problem is described as a cache hit ratio maximization problem emphasizing a relevance score and machine learning model accuracy. A model predicts the video to be cached in the next time stamp, and the relevance score identifies the video to be removed from the cache. Afterwards, we gather the logs and generate the content requests using an extracted video request pattern. These logs are pre-processed to create a dataset divided into three time slots per day. A Long Short-Term Memory (LSTM) model is trained on this dataset to forecast the video at the next time interval. The proposed optimized caching policy is evaluated on our CDN architecture deployed on the Korea Advanced Research Network (KOREN) infrastructure. Our findings demonstrate how adding time-based request patterns impacts the system by increasing the cache hit rate. To show the effectiveness of the proposed model, we compare the results with state-of-the-art techniques.
Keywords: multimedia content delivery; request pattern recognition; real-time machine learning; deep learning; optimization; caching; edge computing
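For illustration only (not from the paper): a small LSTM that predicts which video is likely to be requested in the next time slot from a window of recent slots, roughly in the spirit of the time-pattern model described above. The synthetic data, window size, and network shape are assumptions; the actual work trains on KOREN request logs.

```python
# Illustrative sketch: next-slot video prediction with a small Keras LSTM.
import numpy as np
import tensorflow as tf

num_videos, window = 50, 6
rng = np.random.default_rng(1)
sequences = rng.integers(0, num_videos, size=(500, window + 1))   # fake request history
X, y = sequences[:, :-1], sequences[:, -1]                        # last item is the target

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=num_videos, output_dim=16),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(num_videos, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

recent_slots = X[:1]                           # the most recent window of requests
probs = model.predict(recent_slots, verbose=0)[0]
print("video to prefetch into the cache:", int(np.argmax(probs)))
```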
18. Failure mechanisms and destruction characteristics of cemented coal gangue backfill under compression effect of non-uniform load
Authors: FENG Guo-rui, GUO Wei, QI Ting-ye, LI Zhu, CUI Jia-qing, WANG Hao-chen, CUI Ye-kai, MA Jing-kai. 《Journal of Central South University》 (SCIE, EI, CAS, CSCD), 2024, Issue 8, pp. 2676-2693 (18 pages)
Backfill mining is one of the most important technical means for controlling strata movement and reducing surface subsidence and environmental damage during the exploitation of underground coal resources. Ensuring the stability of the backfill bodies is the primary prerequisite for maintaining the safety of the backfilling working face, and the loading characteristics of backfill are closely related to the deformation and subsidence of the roof. An elastic thin plate model was used to explore the non-uniform subsidence law of the roof, and the non-uniform distribution characteristics of the backfill bodies' load were then revealed. Through a self-developed non-uniform loading device combined with acoustic emission (AE) and digital image correlation (DIC) monitoring technology, the synergistic dynamic evolution law of the bearing capacity, apparent cracks, and internal fractures of cemented coal gangue backfills (CCGBs) under loads with different degrees of non-uniformity was explored in depth. The results showed that: 1) The uniaxial compressive strength (UCS) of CCGB increased and then decreased with an increase in the degree of non-uniformity of load (DNL). About 40% DNL was the inflection point of the DNL-UCS curve, and when DNL exceeded 40%, the strength decreased in a cliff-like manner. 2) A positive correlation was observed between the AE ringing count and UCS during the loading process of the specimen, manifested as a higher AE ringing count for the high-strength specimen. 3) Shear cracks gradually increased and the failure mode of specimens gradually changed from an "X" type dominated by tension cracks to an inverted "Y" type dominated by shear cracks with an increase in DNL, and the crack opening displacement at the peak stress decreased and then increased. The crack opening displacement at 40% DNL was the smallest. This was consistent with the judgment of crack size based on the AE b-value, i.e., it showed the typical characteristics of "small b-value, large crack; large b-value, small crack". The research results are of significance for preventing the instability and failure of backfill.
Keywords: cemented coal gangue backfill; non-uniform load; degree of non-uniformity of load; failure mode; crack opening displacement
19. Towards Cache-Assisted Hierarchical Detection for Real-Time Health Data Monitoring in IoHT
Authors: Muhammad Tahir, Mingchu Li, Irfan Khan, Salman AAl Qahtani, Rubia Fatima, Javed Ali Khan, Muhammad Shahid Anwar. 《Computers, Materials & Continua》 (SCIE, EI), 2023, Issue 11, pp. 2529-2544 (16 pages)
Real-time health data monitoring is pivotal for bolstering road services' safety, intelligence, and efficiency within the Internet of Health Things (IoHT) framework. Yet, delays in data retrieval can markedly hinder the efficacy of big data awareness detection systems. We advocate a collaborative caching approach involving edge devices and cloud networks to combat this. This strategy is devised to streamline the data retrieval path, subsequently diminishing network strain. Crafting an adept cache processing scheme poses its own set of challenges, especially given the transient nature of monitoring data and the imperative for swift data transmission, intertwined with resource allocation tactics. This paper unveils a novel mobile healthcare solution that harnesses the power of our collaborative caching approach, facilitating nuanced health monitoring via edge devices. The system capitalizes on cloud computing for intricate health data analytics, especially in pinpointing health anomalies. Given the dynamic locational shifts and possible connection disruptions, we have architected a hierarchical detection system, particularly for crises. This system caches data efficiently and incorporates a detection utility to assess data freshness and potential lag in response times. Furthermore, we introduce the Cache-Assisted Real-Time Detection (CARD) model, crafted to optimize utility. Addressing the inherent complexity of the NP-hard CARD model, we champion a greedy algorithm as a solution. Simulations reveal that our collaborative caching technique markedly elevates the Cache Hit Ratio (CHR) and data freshness, outshining contemporaneous benchmark algorithms. The empirical results underscore the strength and efficiency of our innovative IoHT-based health monitoring solution. To encapsulate, this paper tackles the nuances of real-time health data monitoring in the IoHT landscape, presenting a joint edge-cloud caching strategy paired with a hierarchical detection system. Our methodology yields enhanced cache efficiency and data freshness. The corroborative numerical data accentuates the feasibility and relevance of our model, casting a beacon for the future trajectory of real-time health data monitoring systems.
Keywords: real-time health data monitoring; Cache-Assisted Real-Time Detection (CARD); edge-cloud collaborative caching scheme; hierarchical detection; Internet of Health Things (IoHT)
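For illustration only (not from the paper): a greedy selection of which health-data items to keep in an edge cache under a capacity budget, maximizing a simple utility that rewards popularity and freshness. This mirrors the generic "greedy algorithm for an NP-hard caching utility" idea; the utility function, item fields, and numbers are hypothetical, not the CARD model itself.

```python
# Illustrative sketch: density-greedy knapsack heuristic for edge-cache admission.
def greedy_cache_selection(items, capacity):
    """items: list of dicts with 'name', 'size', 'popularity', 'age_seconds'."""
    def utility(item):
        freshness = 1.0 / (1.0 + item["age_seconds"])          # newer data is worth more
        return item["popularity"] * freshness

    ranked = sorted(items, key=lambda it: utility(it) / it["size"], reverse=True)
    chosen, used = [], 0
    for item in ranked:                                         # fill capacity by utility density
        if used + item["size"] <= capacity:
            chosen.append(item["name"])
            used += item["size"]
    return chosen

items = [
    {"name": "heart_rate_stream", "size": 2, "popularity": 9.0, "age_seconds": 5},
    {"name": "weekly_report",     "size": 5, "popularity": 3.0, "age_seconds": 86400},
    {"name": "fall_alerts",       "size": 1, "popularity": 7.0, "age_seconds": 30},
]
print(greedy_cache_selection(items, capacity=4))   # ['heart_rate_stream', 'fall_alerts']
```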
20. Study of the pressure transient behavior of directional wells considering the effect of non-uniform flux distribution
Authors: Yan-Zhong Liang, Bai-Lu Teng, Wan-Jing Luo. 《Petroleum Science》 (SCIE, EI, CAS, CSCD), 2024, Issue 3, pp. 1765-1779 (15 pages)
During production, the fluid in the vicinity of a directional well enters the wellbore at different rates, leading to a non-uniform flux distribution along the directional well. However, in all existing studies, this is oversimplified to a uniform flux distribution, which can lead to inaccurate results for field applications. Therefore, this paper proposes a semi-analytical model of a directional well based on the assumption of non-uniform flux distribution. Specifically, the directional well is discretized into a carefully chosen series of linear sources, such that the complex well trajectory can be captured and the non-uniform flux distribution along the wellbore can be considered to model the three-dimensional flow behavior. Using the finite difference method, we obtain the numerical solutions of the transient flow within the wellbore. With the aid of Green's function method, we obtain the analytical solutions of the transient flow from the matrix to the wellbore. The complete flow behavior of a directional well is represented by coupling these two types of transient flow. Subsequently, on the basis of the proposed model, we conduct a comprehensive analysis of the pressure transient behavior of a directional well. The computation results show that the flux variation along the directional well has a significant effect on pressure responses. In addition, a directional well in an infinite reservoir may exhibit the following flow regimes: wellbore afterflow, transition flow, inclined radial flow, elliptical flow, horizontal linear flow, and horizontal radial flow. The horizontal linear flow can be observed only if the formation thickness is much smaller than the well length. Furthermore, a dip region that appears on the pressure derivative curve indicates the three-dimensional flow behavior near the wellbore.
Keywords: directional well; pressure transient behavior; semi-analytical model; non-uniform flux