Journal Articles
11 articles found
1. NPBMT: A Novel and Proficient Buffer Management Technique for Internet of Vehicle-Based DTNs
Authors: Sikandar Khan, Khalid Saeed, Muhammad Faran Majeed, Salman A. AlQahtani, Khursheed Aurangzeb, Muhammad Shahid Anwar. Computers, Materials & Continua (SCIE, EI), 2023, Issue 10, pp. 1303-1323.
Delay Tolerant Networks (DTNs) suffer from message delay because end-to-end connectivity between nodes is often unavailable, especially when the nodes are mobile. Nodes in DTNs have limited buffer storage for holding delayed messages, and this sharing of data creates a buffer shortage problem: the buffer becomes congested and no space remains for incoming messages. To address this problem, a buffer management policy named "A Novel and Proficient Buffer Management Technique (NPBMT) for the Internet of Vehicle-Based DTNs" is proposed. NPBMT selects appropriately sized messages with the lowest Time-to-Live (TTL) and drops that combination of messages to accommodate newly arrived ones. To evaluate the proposed technique, it is compared with Drop Oldest (DOL), Size Aware Drop (SAD), and Drop Largest (DLA). The technique is implemented in the Opportunistic Network Environment (ONE) simulator, using the shortest-path map-based movement model for the nodes together with the epidemic routing protocol. The simulation results show a significant improvement in delivery probability: the proposed policy delivered 380 messages, DOL delivered 186, SAD delivered 190, and DLA delivered only 95. A significant decrease is also observed in the overhead ratio: SAD reaches 324.37, DLA 266.74, DOL 141.89, and NPBMT 52.85, a substantial reduction compared with the existing policies. The average network latency of DOL is 7785.5, DLA 5898.42, and SAD 5789.43, whereas NPBMT achieves 3909.4, which shows that the proposed policy keeps messages in the network for a shorter time and thereby reduces the overhead ratio.
Keywords: delay tolerant networks, buffer management, message drop policy, ONE simulator, NPBMT
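As a rough illustration of the drop policy summarized above (a minimal sketch under assumptions, not the authors' implementation), the following Python snippet drops the buffered messages with the lowest remaining TTL until an incoming message fits; the paper's notion of combining "appropriate-size" messages is simplified to a greedy selection:

```python
def select_messages_to_drop(buffer, incoming_size, free_space):
    """Pick buffered messages to drop so that an incoming message of
    `incoming_size` bytes fits. Each buffered message is a dict with
    'id', 'size' (bytes) and 'ttl' (remaining time-to-live).
    Messages closest to expiry are sacrificed first."""
    needed = incoming_size - free_space
    if needed <= 0:
        return []                              # enough room already

    victims = []
    for msg in sorted(buffer, key=lambda m: m["ttl"]):   # lowest TTL first
        if needed <= 0:
            break
        victims.append(msg["id"])
        needed -= msg["size"]

    return victims if needed <= 0 else []      # give up if it still cannot fit


# Hypothetical usage:
buf = [{"id": "m1", "size": 40, "ttl": 5},
       {"id": "m2", "size": 70, "ttl": 60},
       {"id": "m3", "size": 30, "ttl": 2}]
print(select_messages_to_drop(buf, incoming_size=60, free_space=0))   # ['m3', 'm1']
```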
2. A Utility-Based Buffer Management Policy for Improving Data Dissemination in Opportunistic Networks (Cited: 5)
Authors: Jiansheng Yao, Chunguang Ma, Haitao Yu, Yanling Liu, Qi Yuan. China Communications (SCIE, CSCD), 2017, Issue 7, pp. 118-126.
In opportunistic networks, most existing buffer management policies, including scheduling and passive dropping policies, are designed mainly for routing protocols. In this paper, we propose a Utility-based Buffer Management strategy (UBM) for data dissemination in opportunistic networks. In UBM, we first design a method of computing the utility values of cached messages according to the interest of nodes and the delivery probability of messages, and then propose an overall buffer management policy based on this utility. UBM, driven by receivers, implements not only caching policies and passive and proactive dropping policies, but also the scheduling policies of senders. Simulation results show that, compared with some classical dropping strategies, UBM achieves a higher delivery ratio and lower latency at a smaller network cost.
Keywords: opportunistic networks, data dissemination, buffer management strategy, utility-based
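The abstract describes the utility computation only at a high level; a minimal sketch, assuming the utility of a cached message is simply the product of the receiving node's interest and the message's delivery probability (the paper's exact formula is not reproduced here):

```python
def utility(interest, delivery_prob):
    """Hypothetical per-message utility: how interested the node is in the
    content (0..1) times the estimated delivery probability (0..1)."""
    return interest * delivery_prob


def drop_candidates(cache, space_needed):
    """Free `space_needed` bytes by dropping the lowest-utility messages first."""
    dropped, freed = [], 0
    for msg in sorted(cache, key=lambda m: utility(m["interest"], m["p_deliver"])):
        if freed >= space_needed:
            break
        dropped.append(msg["id"])
        freed += msg["size"]
    return dropped
```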
3. Buffer management optimization strategy for satellite ATM
Authors: Lu Rang, Cao Zhigang. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2006, Issue 1, pp. 19-23.
ECTD (erroneous cell tail drop), a buffer management optimization strategy, is proposed to improve the utilization of buffer resources in satellite ATM (asynchronous transfer mode) networks. Under this strategy, erroneous cells caused by the satellite channel, together with the following cells that belong to the same PDU (protocol data unit), are discarded; it targets non-real-time data services that rely on a higher-layer protocol for retransmission. Based on the EPD (early packet drop) policy, mathematical models are established with and without ECTD. The numerical results show that ECTD optimizes buffer management and improves effective throughput (goodput), and that the gain in goodput grows with the CER (cell error ratio) and the PDU length: the higher their values, the greater the improvement. For example, when the average PDU length is 30 and 90, the goodput improvement is about 4% and 10%, respectively.
Keywords: asynchronous transfer mode, satellite ATM, buffer management, early packet drop, erroneous cell tail drop
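The tail-drop rule itself is simple enough to sketch. The snippet below is a hypothetical illustration (field names assumed, not from the paper): once a cell of a PDU arrives corrupted, that cell and every later cell of the same PDU are discarded, since the higher-layer protocol must retransmit the whole PDU anyway.

```python
def ectd_filter(cells):
    """Erroneous cell tail drop over an ordered stream of ATM cells.
    Each cell is a dict with 'pdu_id' and a boolean 'error' flag."""
    poisoned_pdus = set()
    kept = []
    for cell in cells:
        if cell["pdu_id"] in poisoned_pdus:
            continue                      # tail of an already-corrupted PDU
        if cell["error"]:
            poisoned_pdus.add(cell["pdu_id"])
            continue                      # drop the erroneous cell itself
        kept.append(cell)
    return kept
```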
4. STUDY AND ENHANCEMENT ON BUFFER MANAGEMENT FOR SMOOTH HANDOVER IN MIPV6
Authors: Lin Huasheng, Jin Yuehui, Cheng Shiduan, Fan Rui. Journal of Electronics (China), 2005, Issue 2, pp. 131-141.
To improve Transfer Control Protocol (TCP) performance in mobile environments, smooth handover with buffer management has been proposed to realize seamless handovers. However, our simulations show that even when smooth handover in Mobile IPv6 (MIPv6) is implemented, TCP cannot always achieve better performance because of bursty forwarding of buffered packets. Building on a study of buffer management for smooth handover, this paper proposes an enhanced buffer management scheme to improve TCP performance. In this scheme, a packet-pair probing technique is adopted to estimate the available bandwidth of the new path from the previous router (Prtr) to the Mobile Node (MN), which the Prtr then uses to pace the forwarding of buffered packets. Simulation results demonstrate that smooth handover with this scheme achieves better TCP performance than the original scheme.
Keywords: Transfer Control Protocol (TCP), smooth handover, buffer management, packet-pair technology
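Packet-pair probing is a standard estimation technique: two packets sent back to back are spread apart by the bottleneck link, so the receiver-side gap between them reflects the available path bandwidth. A minimal sketch of the underlying arithmetic (not the paper's handover scheme):

```python
def packet_pair_bandwidth(packet_size_bytes, arrival_gap_seconds):
    """Estimate bottleneck bandwidth from one packet pair: the second packet
    trails the first by roughly the time the bottleneck needs to transmit
    one packet, so bandwidth ~= packet size / arrival gap."""
    if arrival_gap_seconds <= 0:
        raise ValueError("arrival gap must be positive")
    return packet_size_bytes * 8 / arrival_gap_seconds   # bits per second


# Hypothetical usage: 1500-byte probes arriving 1.2 ms apart -> about 10 Mbit/s
print(packet_pair_bandwidth(1500, 0.0012))
```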
5. HAT: an efficient buffer management method for flash-based hybrid storage systems (Cited: 1)
Authors: Yanfei Lv, Bin Cui, Xuexuan Chen, Jing Li. Frontiers of Computer Science (SCIE, EI, CSCD), 2014, Issue 3, pp. 440-455.
Flash solid-state drives (SSDs) provide much faster access to data than traditional hard disk drives (HDDs). The current price and performance of SSDs suggest they can be adopted as a data buffer between main memory and the HDD, and buffer management policy in such hybrid systems has recently attracted increasing interest from the research community. In this paper, we propose a novel approach to managing the buffer in flash-based hybrid storage systems, named hotness aware hit (HAT). HAT exploits a page reference queue to record the access history as well as the status of accessed pages, i.e., hot, warm, and cold. The page reference queue is further split into hot and warm regions, which broadly correspond to memory and flash. HAT updates the page status and handles page migration in the memory hierarchy according to the current page status and the hit position in the page reference queue. Compared with existing hybrid storage approaches, HAT manages the memory and flash cache layers more effectively. Our empirical evaluation on benchmark traces demonstrates the superiority of the proposed strategy over state-of-the-art competitors.
Keywords: flash memory, SSD, hybrid storage, buffer management, hotness aware
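As a rough illustration of hot/warm/cold page status tracked by a reference queue (a simplified toy, not the HAT algorithm; region sizes and migration rules are assumptions):

```python
from collections import OrderedDict

class SimplePageStatus:
    """Toy hotness tracker with a bounded reference queue (front = most recent).
    Pages hit in the front region count as 'hot' (kept in memory), the rest of
    the queue as 'warm' (flash-cache candidates); pages that fall out are 'cold'."""

    def __init__(self, hot_slots=4, warm_slots=8):
        self.hot_slots = hot_slots
        self.capacity = hot_slots + warm_slots
        self.queue = OrderedDict()                        # page_id -> None

    def access(self, page_id):
        if page_id in self.queue:
            self.queue.move_to_end(page_id, last=False)   # hit: move to front
        else:
            if len(self.queue) >= self.capacity:
                self.queue.popitem(last=True)             # evict the coldest entry
            self.queue[page_id] = None
            self.queue.move_to_end(page_id, last=False)
        return self.status(page_id)

    def status(self, page_id):
        for i, pid in enumerate(self.queue):
            if pid == page_id:
                return "hot" if i < self.hot_slots else "warm"
        return "cold"
```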
6. Robust Buffer Management Mechanism in Quality of Service Routers (Cited: 1)
Authors: Wang Hao, Cheng Minjuan, Tian Zuohua. Journal of Shanghai Jiaotong University (Science) (EI), 2011, Issue 4, pp. 452-458.
Active queue management (AQM) is essentially a router buffer management strategy supporting TCP congestion control. Since existing AQM schemes exhibit poor performance, and even instability, in networks with uncertain time delays, a robust buffer management (RBM) mechanism is proposed to guarantee quality of service (QoS). RBM consists of a Smith predictor and two independent controllers. The Smith predictor compensates for the round-trip time (RTT) delay and restrains its negative influence on network performance. The main feedback controller and the disturbance rejection controller are designed as a proportional-integral (PI) controller and a proportional (P) controller, via internal model control (IMC) and frequency-domain analysis respectively. Simulation experiments in Network Simulator 2 (NS2) demonstrate that RBM can effectively keep the buffer occupation around the target value in the presence of time delay and system disturbance. Compared with the delay compensation AQM algorithm (DC-AQM), the proportional-integral-derivative (PID) algorithm and the random exponential marking (REM) algorithm, the RBM scheme is superior in terms of stability, responsiveness and robustness.
Keywords: congestion control, buffer management, active queue management (AQM), Smith predictor, quality of service (QoS)
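The abstract does not give the controller equations; the sketch below only shows the general shape of a PI-type AQM loop that steers queue occupancy toward a target by adjusting a drop/mark probability. The Smith-predictor delay compensation and the disturbance-rejection controller described in the paper are omitted, and the gains are illustrative only.

```python
class PIAqmController:
    """Generic proportional-integral AQM loop: raise the drop/mark probability
    when the queue sits above its target and lower it when below."""

    def __init__(self, target_queue, kp=1e-4, ki=2e-5, period=0.01):
        self.target = target_queue        # desired buffer occupancy (packets)
        self.kp, self.ki, self.period = kp, ki, period
        self.integral = 0.0

    def update(self, queue_length):
        error = queue_length - self.target
        self.integral += error * self.period
        p = self.kp * error + self.ki * self.integral
        return min(max(p, 0.0), 1.0)      # clamp to a valid probability
```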
7. A Dynamic Resource-Aware Routing Protocol in Resource-Constrained Opportunistic Networks
Authors: Aref Hassan Kurd Ali, Halikul Lenando, Slim Chaoui, Mohamad Alrfaay, Medhat A. Tawfeek. Computers, Materials & Continua (SCIE, EI), 2022, Issue 2, pp. 4147-4167.
Recently, Opportunistic Networks (OppNets) have been considered one of the most attractive developments of Mobile Ad Hoc Networks, having arisen thanks to the development of intelligent devices. OppNets are characterized by a rough and dynamic topology as well as unpredictable contacts and contact times. Data is forwarded and stored in intermediate nodes until the next contact opportunity occurs, so achieving a high delivery ratio in OppNets is a challenging issue. It is imperative that a routing protocol use network resources, as far as they are available, in order to achieve higher network performance. In this article, we introduce the Resource-Aware Routing (ReAR) protocol, which dynamically controls buffer usage with the aim of balancing the load in resource-constrained, stateless and non-social OppNets. ReAR invokes our recently introduced mutual information-based weighting approach to estimate the impact of the buffer size on network performance and, ultimately, to regulate buffer consumption in real time. The proposed routing protocol is proved conceptually and simulated using the Opportunistic Network Environment simulator. Experiments show that ReAR outperforms a set of well-known routing protocols, such as EBR, Epidemic, MaxProp, energy-aware Spray and Wait, and energy-aware PRoPHET, in terms of message delivery ratio and overhead ratio.
Keywords: opportunistic networks, mobile ad hoc networks, routing protocols, resource-constrained networks, load balancing, buffer management
8. A Unified Buffering Management with Set Divisible Cache for PCM Main Memory
Authors: Mei-Ying Bian, Su-Kyung Yoon, Jeong-Geun Kim, Sangjae Nam, Shin-Dug Kim. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2016, Issue 1, pp. 137-146.
This research proposes a phase-change memory (PCM) based main memory system with an effective combination of a superblock-based adaptive buffering structure and its associated set divisible last-level cache (LLC). To achieve performance close to that of dynamic random-access memory (DRAM) based main memory, the superblock-based adaptive buffer (SABU) comprises dual DRAM buffers, i.e., an aggressive superblock-based pre-fetching buffer (SBPB) and an adaptive sub-block reusing buffer (SBRB), together with a set divisible LLC based on a cache space optimization scheme. According to our experiments, the longer PCM access latency can typically be hidden by the proposed SABU, which significantly reduces the number of writes to the PCM main memory, by 26.44%. The SABU approach can reduce PCM access latency to as little as 0.43 times that of conventional DRAM main memory, and the average memory energy consumption can be reduced by 19.7%.
Keywords: memory hierarchy, memory structure, cache memory, buffer management
9. Prober: exploiting sequential characteristics in buffer for improving SSDs write performance
Authors: Wen Zhou, Dan Feng, Yu Hua, Jingning Liu, Fangting Huang, Yu Chen, Shuangwu Zhang. Frontiers of Computer Science (SCIE, EI, CSCD), 2016, Issue 5, pp. 951-964.
Solid state disks (SSDs) are becoming one of the mainstream storage devices due to salient features such as high read performance and low power consumption. To obtain high write performance and extend flash lifespan, SSDs use an internal DRAM to buffer frequently rewritten data and reduce the number of program operations on the flash. However, existing buffer management algorithms fail to exploit data access features to predict data attributes. In many real-world workloads, most large sequential write requests are rarely rewritten in the near future; when such write requests arrive, many hot data pages are evicted from DRAM into flash memory, hurting overall system performance. To address this problem, we propose a novel large-write-data identification scheme, called Prober. This scheme probes large sequential write sequences among the write streams at an early stage to prevent them from residing in the buffer. Meanwhile, to further release space and reduce the waiting time for incoming requests, we temporarily buffer the large data in DRAM when the buffer has free space, and use an active write-back scheme for large sequential write data when the flash array becomes idle. Experimental results demonstrate that our schemes improve the hit ratio of write requests by up to 10%, decrease the average response time by up to 42%, and reduce the number of erase operations by up to 11%, compared with state-of-the-art buffer replacement algorithms.
Keywords: SSDs, storage system, buffer management, sequential write requests
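A minimal sketch of the sequential-write detection idea (hypothetical request format, not the Prober implementation): writes whose logical block addresses follow on from one another are grouped into runs, and runs beyond a size threshold are treated as large sequential writes that should not occupy the DRAM buffer for long.

```python
def split_sequential_runs(writes, size_threshold=256):
    """Group write requests into runs of consecutive logical block addresses
    and mark which runs are 'large' (total length >= size_threshold blocks).
    Each write is a dict with 'lba' (start address) and 'len' (blocks)."""
    runs, current = [], []
    for w in writes:
        if current and w["lba"] == current[-1]["lba"] + current[-1]["len"]:
            current.append(w)                 # continues the current run
        else:
            if current:
                runs.append(current)
            current = [w]                     # start a new run
    if current:
        runs.append(current)
    # Large runs would bypass the buffer or be written back early when idle.
    return [(run, sum(w["len"] for w in run) >= size_threshold) for run in runs]
```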
10. Striping and Scheduling for Large Scale Multimedia Servers
Authors: Kyung-Oh Lee, Jun-Ho Park, Yoon-Young Park. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2004, Issue 6, pp. 885-895.
When designing a multimedia server, several things must be decided: which scheduling scheme to adopt, how to allocate multimedia objects on storage devices, and the round length with which the streams will be serviced. Several problems in the design of large-scale multimedia servers are addressed, with the following contributions: (1) a striping scheme is proposed that minimizes the number of seeks and hence maximizes performance; (2) a simple and efficient mechanism is presented to find the optimal striping unit size as well as the optimal round length, which exploits both the characteristics of VBR streams and the resources available in the system; and (3) the characteristics and resource requirements of several scheduling schemes are investigated in order to determine which scheme performs best for real-time multimedia servicing. Based on our analysis and experimental results, the CSCAN scheme outperforms the other schemes. These results should be of value in the design of effective large-scale multimedia servers.
Keywords: realtime multimedia, storage server, scheduling, data placement, buffer management, variable bit rate
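CSCAN (circular SCAN) ordering is a standard disk-scheduling policy; as a quick reminder of what it does (a sketch, not the paper's server design), one sweep services pending requests in increasing track order from the current head position and then wraps around to the lowest outstanding track:

```python
def cscan_order(pending_tracks, head_position):
    """Service order of one CSCAN sweep: requests at or beyond the head in
    ascending order, then wrap around to the lowest outstanding requests."""
    ahead = sorted(t for t in pending_tracks if t >= head_position)
    behind = sorted(t for t in pending_tracks if t < head_position)
    return ahead + behind


# Hypothetical usage: head at track 50
print(cscan_order([10, 95, 60, 3, 77], head_position=50))   # [60, 77, 95, 3, 10]
```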
11. Design and Implementation of an Interoperable Object Platform for Multi-Databases
Authors: Gu Ning, Xu Xuebiao, Shi Bole. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2000, Issue 3, pp. 249-260.
In this paper, the authors present the design and implementation of an Interoperable Object Platform for Multi-Databases (IOPMD). The aim of the system is to provide a uniform object view and a set of tools for object manipulation and query over heterogeneous multiple data sources in a client/server environment. The common object model is compatible with ODMG 2.0 and OMG's CORBA, and provides the main object-oriented features such as OID, attribute, method, inheritance, and reference. Three types of interfaces, namely Vface, IOQL and the C++ API, give the database programmer tools and functionality for application development. Nested transactions and compensation techniques are adopted in the transaction manager. In discussing some key implementation techniques, translation and mapping approaches from various schemata to a common object schema are proposed. Buffer management provides the data caching policy and consistency maintenance of cached data. Version management presents operations based on the definitions in the semantic version model, and introduces the implementation of the semantic version graph.
Keywords: client/server architecture, common object model, transaction management, buffer management, schema translation, version management