Delay Tolerant Networks (DTNs) have the major problem of message delay in the network due to a lack of end-to-end connectivity between the nodes, especially when the nodes are mobile. The nodes in DTNs have limited buffer storage for storing delayed messages, and the continuous exchange of data creates a buffer shortage problem. Consequently, buffer congestion occurs and no space remains in the buffer for incoming messages. To address this problem, a buffer management policy named “A Novel and Proficient Buffer Management Technique (NPBMT) for the Internet of Vehicle-Based DTNs” is proposed. NPBMT combines appropriate-size messages with the lowest Time-to-Live (TTL) and then drops this combination of messages to accommodate newly arrived messages. To evaluate the performance of the proposed technique, it is compared with Drop Oldest (DOL), Size Aware Drop (SAD), and Drop Largest (DLA). The proposed technique is implemented in the Opportunistic Network Environment (ONE) simulator. The shortest-path map-based movement model has been used as the movement model for the nodes, with the epidemic routing protocol. From the simulation results, a significant change is observed in the delivery probability: the proposed policy delivered 380 messages, DOL delivered 186 messages, SAD delivered 190 messages, and DLA delivered only 95 messages. A significant decrease is observed in the overhead ratio: the SAD overhead ratio is 324.37, the DLA overhead ratio is 266.74, and the DOL and NPBMT overhead ratios are 141.89 and 52.85, respectively, which reveals a significant reduction of the overhead ratio in NPBMT compared to existing policies. The average network latency of DOL is 7785.5, DLA is 5898.42, and SAD is 5789.43, whereas the NPBMT latency average is 3909.4. This reveals that the proposed policy keeps messages in the network for a short time, which reduces the overhead ratio.
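As an illustration of the drop-selection idea described in this abstract, the following Python sketch greedily combines the lowest-TTL messages until enough buffer space is freed for an incoming message. The data layout (a list of dictionaries with size and ttl fields) and the greedy strategy are assumptions made for illustration, not the exact NPBMT algorithm.

```python
def select_messages_to_drop(buffer, incoming_size, free_space):
    """Greedy sketch: combine the lowest-TTL messages whose total size frees
    enough room for the incoming message (illustrative, not the NPBMT paper)."""
    needed = incoming_size - free_space
    if needed <= 0:
        return []                       # buffer already has room
    victims, freed = [], 0
    # Consider messages with the lowest remaining TTL first.
    for msg in sorted(buffer, key=lambda m: m["ttl"]):
        victims.append(msg)
        freed += msg["size"]
        if freed >= needed:
            return victims              # enough space recovered
    return []                           # cannot make room; reject the incoming message
```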
In opportunistic networks, most existing buffer management policies, including scheduling and passive dropping policies, are designed mainly for routing protocols. In this paper, we propose a Utility-based Buffer Management strategy (UBM) for data dissemination in opportunistic networks. In UBM, we first design a method of computing the utility values of cached messages according to the interest of nodes and the delivery probability of messages, and then propose an overall buffer management policy based on this utility. UBM, driven by receivers, implements not only caching policies and passive and proactive dropping policies, but also the scheduling policies of senders. Simulation results show that, compared with some classical dropping strategies, UBM can obtain a higher delivery ratio and lower delivery latency at a smaller network cost.
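A minimal sketch of the utility idea follows: each cached message is scored from the node's interest in it and its delivery probability, and the lowest-utility message is the drop candidate. The linear weighting and the field names are assumptions for illustration; the abstract does not give UBM's exact utility function.

```python
def message_utility(interest, delivery_prob, w_interest=0.5, w_delivery=0.5):
    """Illustrative utility of a cached message: a weighted combination of the
    receiver's interest and the message's delivery probability (assumed form)."""
    return w_interest * interest + w_delivery * delivery_prob

def pick_drop_candidate(cache):
    """Passive dropping sketch: evict the message with the lowest utility."""
    return min(cache, key=lambda m: message_utility(m["interest"], m["delivery_prob"]))
```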
ECTD (erroneous cell tail drop), a buffer management optimization strategy, is suggested to improve the utilization of buffer resources in satellite ATM (asynchronous transfer mode) networks. The strategy, in which erroneous cells caused by the satellite channel and the following cells belonging to the same PDU (protocol data unit) are discarded, concerns non-real-time data services that use a higher-layer protocol for retransmission. Based on the EPD (early packet drop) policy, mathematical models are established with and without ECTD. The numerical results show that ECTD optimizes buffer management and improves effective throughput (goodput), and the increase in goodput is related to the CER (cell error ratio) and the PDU length: the higher their values, the greater the increase. For example, when the average PDU lengths are 30 and 90, the improvements in goodput are about 4% and 10%, respectively.
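The dropping rule described above can be sketched directly: once an erroneous cell is seen, that cell and every following cell of the same PDU are discarded. The cell representation (dictionaries with pdu_id and error fields) is an assumption for illustration.

```python
def ectd_filter(cells):
    """ECTD sketch: discard an erroneous cell and all later cells of the same PDU."""
    dropped_pdus = set()
    kept = []
    for cell in cells:
        if cell["pdu_id"] in dropped_pdus:
            continue                          # tail of an already-corrupted PDU
        if cell["error"]:
            dropped_pdus.add(cell["pdu_id"])  # start dropping this PDU's tail
            continue
        kept.append(cell)
    return kept
```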
For improving Transmission Control Protocol (TCP) performance in mobile environments, smooth handover with buffer management has been proposed to realize seamless handovers. However, in our simulation, even when smooth handover in Mobile IPv6 (MIPv6) is implemented, TCP cannot always achieve better performance due to bursty packet forwarding. Based on a study of buffer management for smooth handover, this paper proposes an enhanced buffer management scheme for smooth handover to improve TCP performance. In this scheme, a packet-pair probing technique is adopted to estimate the available bandwidth of the new path from the previous router (Prtr) to the Mobile Node (MN), which the Prtr then uses to control the forwarding of buffered packets. The simulation results demonstrate that smooth handover with this scheme achieves better TCP performance than the original scheme.
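The packet-pair estimate used for this pacing is the classic dispersion formula: bandwidth is roughly the probe packet size divided by the inter-arrival gap of two back-to-back probes. The sketch below shows that estimate and a derived forwarding interval; the function names are illustrative and not taken from the paper.

```python
def packet_pair_bandwidth(packet_size_bytes, arrival_gap_s):
    """Classic packet-pair estimate: bandwidth ~ packet size / inter-arrival gap."""
    return packet_size_bytes * 8 / arrival_gap_s          # bits per second

def forwarding_interval(packet_size_bytes, est_bandwidth_bps):
    """Pace buffered packets so the forwarding rate stays within the
    estimated bandwidth of the new path (illustrative)."""
    return packet_size_bytes * 8 / est_bandwidth_bps       # seconds between packets
```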
Recently, Opportunistic Networks (OppNets) are considered to be one of the most attractive developments of Mobile Ad Hoc Networks, having arisen thanks to the development of intelligent devices. OppNets are characterized by a rough and dynamic topology as well as unpredictable contacts and contact times. Data is forwarded and stored in intermediate nodes until the next contact opportunity occurs. Therefore, achieving a high delivery ratio in OppNets is a challenging issue. It is imperative that any routing protocol use network resources, as far as they are available, in order to achieve higher network performance. In this article, we introduce the Resource-Aware Routing (ReAR) protocol, which dynamically controls buffer usage with the aim of balancing the load in resource-constrained, stateless and non-social OppNets. The ReAR protocol invokes our recently introduced mutual information-based weighting approach to estimate the impact of the buffer size on network performance and ultimately to regulate buffer consumption in real time. The proposed routing protocol is validated conceptually and simulated using the Opportunistic Network Environment simulator. Experiments show that the ReAR protocol outperforms a set of well-known routing protocols such as EBR, Epidemic, MaxProp, energy-aware Spray and Wait, and energy-aware PRoPHET in terms of message delivery ratio and overhead ratio.
Flash solid-state drives (SSDs) provide much faster access to data compared with traditional hard disk drives (HDDs). The current price and performance of SSDs suggest they can be adopted as a data buffer between main memory and the HDD, and buffer management policy in such hybrid systems has recently attracted more and more interest from the research community. In this paper, we propose a novel approach to manage the buffer in flash-based hybrid storage systems, named hotness aware hit (HAT). HAT exploits a page reference queue to record the access history as well as the status of accessed pages, i.e., hot, warm, and cold. Additionally, the page reference queue is further split into hot and warm regions, which correspond to the memory and flash in general. The HAT approach updates the page status and handles page migration in the memory hierarchy according to the current page status and the hit position in the page reference queue. Compared with existing hybrid storage approaches, the proposed HAT can manage the memory and flash cache layers more effectively. Our empirical evaluation on benchmark traces demonstrates the superiority of the proposed strategy against state-of-the-art competitors.
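The following Python sketch illustrates the reference-queue idea from this abstract: hits in the front (hot) region suggest memory residence, hits in the next (warm) region suggest flash, and first references start cold. The class, the region lengths, and the promotion-to-front policy are assumptions for illustration, not the paper's exact algorithm.

```python
class HATSketch:
    """Illustrative hotness-aware page reference queue (not the paper's exact scheme)."""
    def __init__(self, hot_len=4, warm_len=8):
        self.queue = []                         # most recently accessed page at index 0
        self.hot_len, self.warm_len = hot_len, warm_len

    def access(self, page_id):
        if page_id in self.queue:
            pos = self.queue.index(page_id)
            status = "hot" if pos < self.hot_len else "warm"
            self.queue.remove(page_id)
        else:
            status = "cold"                     # first reference
        self.queue.insert(0, page_id)           # promote to the front of the queue
        del self.queue[self.hot_len + self.warm_len:]   # bound the queue length
        return status                           # caller decides memory/flash placement
```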
Active queue management (AQM) is essentially a router buffer management strategy supporting TCP congestion control. Since existing AQM schemes exhibit poor performance and even instability in networks with uncertain time delays, a robust buffer management (RBM) mechanism is proposed to guarantee the quality of service (QoS). RBM consists of a Smith predictor and two independent controllers. The Smith predictor is used to compensate for the round-trip time (RTT) delay and to restrain its negative influence on network performance. The main feedback controller and the disturbance rejection controller are designed as a proportional-integral (PI) controller and a proportional (P) controller by internal model control (IMC) and frequency-domain analysis, respectively. Simulation experiments in Network Simulator 2 (NS2) demonstrate that RBM can effectively keep the buffer occupancy around the target value despite time delay and system disturbance. Compared with the delay compensation AQM algorithm (DC-AQM), the proportional-integral-derivative (PID) algorithm and the random exponential marking (REM) algorithm, the RBM scheme exhibits superiority in terms of stability, responsiveness and robustness.
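A minimal discrete-time sketch of the structure described above is given below: a PI controller acts on a queue measurement that the Smith predictor has compensated by adding the difference between an internal model's undelayed and RTT-delayed outputs. The gains, the target, and the caller-supplied queue model are placeholders, not the controller design from the paper.

```python
from collections import deque

class SmithPIController:
    """Sketch of a PI AQM controller with Smith-predictor delay compensation
    (illustrative structure only, not the RBM design)."""
    def __init__(self, kp, ki, target_queue, rtt_steps, queue_model):
        self.kp, self.ki, self.target = kp, ki, target_queue
        self.integral = 0.0
        self.queue_model = queue_model              # callable: (model_queue, control, dt) -> new model queue
        self.model_out = 0.0                        # undelayed internal model output
        self.delayed = deque([0.0] * rtt_steps)     # model output delayed by one RTT

    def step(self, measured_queue, dt):
        # Smith predictor: compensate the RTT delay in the measurement.
        feedback = measured_queue + self.model_out - self.delayed[0]
        error = feedback - self.target
        self.integral += error * dt
        # Marking/dropping probability rises when the compensated queue exceeds the target.
        prob = min(max(self.kp * error + self.ki * self.integral, 0.0), 1.0)
        # Advance the internal (delay-free) model and its delayed copy.
        self.model_out = self.queue_model(self.model_out, prob, dt)
        self.delayed.append(self.model_out)
        self.delayed.popleft()
        return prob
```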
Solid state disks (SSDs) are becoming one of the mainstream storage devices due to their salient features, such as high read performance and low power consumption. In order to obtain high write performance and extend flash lifespan, SSDs leverage an internal DRAM to buffer frequently rewritten data and so reduce the number of program operations upon the flash. However, existing buffer management algorithms fail to leverage data access features to predict data attributes. In various real-world workloads, most large sequential write requests are rarely rewritten in the near future. Once these write requests occur, much hot data will be evicted from DRAM into flash memory, thus jeopardizing overall system performance. In order to address this problem, we propose a novel large write data identification scheme, called Prober. This scheme probes for large sequential write sequences among the write streams at an early stage to prevent them from residing in the buffer. In the meantime, to further release space and reduce the waiting time for handling incoming requests, we temporarily buffer the large data in DRAM when the buffer has free space, and leverage an active write-back scheme for large sequential write data when the flash array turns idle. Experimental results demonstrate that our schemes improve the hit ratio of write requests by up to 10%, decrease the average response time by up to 42% and reduce the number of erase operations by up to 11%, compared with state-of-the-art buffer replacement algorithms.
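As an illustration of the probing idea, the sketch below groups writes with consecutive logical block addresses into runs and tags a run as large-sequential once its cumulative length exceeds a threshold, so such data can bypass (or be written back early from) the DRAM buffer. The threshold, the request fields (lba, length), and the tagging policy are assumptions for illustration, not Prober's actual heuristics.

```python
def tag_large_sequential_writes(requests, length_threshold=256):
    """Tag each write as 'large_sequential' or 'buffer' based on the length of
    the consecutive-LBA run it belongs to (illustrative only)."""
    tags = []
    run_len, prev_end = 0, None
    for req in requests:
        if prev_end is not None and req["lba"] == prev_end:
            run_len += req["length"]          # extends the current sequential run
        else:
            run_len = req["length"]           # start a new run
        prev_end = req["lba"] + req["length"]
        tags.append("large_sequential" if run_len >= length_threshold else "buffer")
    return tags
```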
This research proposes a phase-change memory (PCM) based main memory system with an effective combination of a superblock-based adaptive buffering structure and its associated set-divisible last-level cache (LLC). To achieve performance similar to that of dynamic random-access memory (DRAM) based main memory, the superblock-based adaptive buffer (SABU) is comprised of dual DRAM buffers, i.e., an aggressive superblock-based pre-fetching buffer (SBPB) and an adaptive sub-block reusing buffer (SBRB), and a set-divisible LLC based on a cache space optimization scheme. According to our experiments, the longer PCM access latency can typically be hidden using the proposed SABU, which can significantly reduce the number of writes to the PCM main memory, by 26.44%. The SABU approach can reduce PCM access latency by up to 0.43 times compared with conventional DRAM main memory. Meanwhile, the average memory energy consumption can be reduced by 19.7%.
When designing a multimedia server, several things must be decided: which scheduling scheme to adopt, how to allocate multimedia objects on storage devices, and the round length with which the streams will be serviced. Several problems in the design of large-scale multimedia servers are addressed, with the following contributions: (1) a striping scheme is proposed that minimizes the number of seeks and hence maximizes performance; (2) a simple and efficient mechanism is presented to find the optimal striping unit size as well as the optimal round length, which exploits both the characteristics of VBR streams and the situation of resources in the system; and (3) the characteristics and resource requirements of several scheduling schemes are investigated in order to obtain a clear indication as to which scheme shows the best performance in real-time multimedia servicing. Based on our analysis and experimental results, the CSCAN scheme outperforms the other schemes. It is believed that the results are of value in the design of effective large-scale multimedia servers. Keywords: real-time multimedia, storage server, scheduling, data placement, buffer management, variable bit rate.
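For readers unfamiliar with the scheduling scheme the abstract singles out, the following sketch shows the standard C-SCAN service order: requests at or beyond the current head position are served in ascending track order, then the head wraps around to the lowest pending tracks. This is the generic algorithm, not an implementation detail taken from the paper.

```python
def cscan_order(request_tracks, head_position):
    """C-SCAN sketch: serve tracks >= head position in ascending order, then wrap."""
    ahead = sorted(t for t in request_tracks if t >= head_position)
    behind = sorted(t for t in request_tracks if t < head_position)
    return ahead + behind

# Example: with the head at track 53, pending tracks are served as
# [65, 67, 98, 122, 124, 183, 14, 37].
print(cscan_order([98, 183, 37, 122, 14, 124, 65, 67], 53))
```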
In this paper, the authors present the design and implementation of an Interoperable Object Platform for Multi-Databases (IOPMD). The aim of the system is to provide a uniform object view and a set of tools for object manipulation and query based on heterogeneous multiple data sources under a client/server environment. The common object model is compatible with ODMG 2.0 and OMG's CORBA, and provides the main OO features such as OID, attribute, method, inheritance, reference, etc. Three types of interfaces, namely Vface, IOQL and a C++ API, are given to provide the database programmer with tools and functionality for application development. Nested transactions and compensation technology are adopted in the transaction manager. In discussing some key implementation techniques, translation and mapping approaches from various schemata to a common object schema are proposed. Buffer management provides the data caching policy and consistency maintenance of cached data. Version management presents some operations based on the definitions in the semantic version model, and introduces the implementation of the semantic version graph.