Aiming at the fact that traditional cache replacement strategies lack pertinence to the semantic cache in the process of extensible markup language (XML) algebra queries, a replacement strategy based on the semantic cache contribution value is proposed. First, pattern matching rules for XML algebra queries and semantic caches are given. Second, a method for calculating the semantic cache contribution value is proposed. Experiments on time efficiency over XML documents of four different sizes show that this strategy supports the XML algebra query environment and achieves better time efficiency than both least frequently used (LFU) and least recently used (LRU).
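The abstract does not give the contribution-value formula itself. As a rough illustration of the value-based replacement family the proposed strategy belongs to (in contrast to pure LRU/LFU), the sketch below scores each cached entry and evicts the lowest-scoring one; the class name, fields, and the hits-per-size score are illustrative assumptions, not the paper's method.

```python
class ValueBasedCache:
    """Generic value-based replacement: evict the entry with the
    lowest score. The scoring formula here (hit count divided by
    entry size) is a hypothetical placeholder for the paper's
    semantic cache contribution value."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}  # key -> (value, hits, size)

    def score(self, key):
        _value, hits, size = self.entries[key]
        return hits / size  # frequently hit, small entries rank high

    def get(self, key):
        if key not in self.entries:
            return None
        value, hits, size = self.entries[key]
        self.entries[key] = (value, hits + 1, size)  # record the hit
        return value

    def put(self, key, value, size=1):
        # evict the lowest-scoring entry when the cache is full
        if key not in self.entries and len(self.entries) >= self.capacity:
            victim = min(self.entries, key=self.score)
            del self.entries[victim]
        prior_hits = self.entries.get(key, (None, 0, size))[1]
        self.entries[key] = (value, prior_hits, size)
```

Under this toy score, an entry that was hit once survives over one that was never hit, regardless of insertion order, which is the behavioural difference from plain LRU.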
Spark, a distributed computing platform, has developed rapidly in the field of big data. Its in-memory computing feature reduces disk read overhead and shortens data processing time, giving it broad application prospects in large-scale computing applications such as machine learning and image processing. However, the performance of the Spark platform still needs to be improved. When a large number of tasks are processed simultaneously, Spark's cache replacement mechanism cannot identify high-value data partitions, so memory resources are not fully utilized and the platform's performance suffers. To address the problem that Spark's default cache replacement algorithm cannot accurately evaluate high-value data partitions, the weight influence factors of data partitions are first modeled and evaluated. Then, based on this weighted model, a cache replacement algorithm based on dynamic weighted data value is proposed, which takes into account hit rate and data difference and builds improved integration and usage strategies on top of LRU (Least Recently Used). The weight update algorithm updates the weight value when partition information changes, accurately measuring the importance of a partition in the current job; the cache removal algorithm clears valueless partitions from the cache to release memory resources; and the weight replacement algorithm combines partition weights and partition information to replace RDD partitions when the remaining memory space is insufficient. Finally, the proposed algorithm is experimentally verified in a Spark cluster environment. Experiments show that the algorithm effectively improves the cache hit rate, enhances platform performance, and reduces job execution time by 7.61% compared with existing improved algorithms.
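The exact weight factors and coefficients are not given in this abstract. A minimal sketch of the weight replacement step, assuming a hypothetical weight that rises with hit count and recomputation cost and falls with partition size, might look like:

```python
def partition_weight(p):
    """Hypothetical partition weight: frequently hit partitions that
    are expensive to recompute are worth keeping; size is the cost of
    keeping them. Fields and formula are illustrative assumptions."""
    return (p["hits"] * p["compute_cost"]) / p["size"]

def evict_until_fit(partitions, needed, free):
    """Evict lowest-weight partitions until `needed` bytes of a new
    partition fit into `free` bytes of remaining memory."""
    victims = []
    for p in sorted(partitions, key=partition_weight):
        if free >= needed:
            break
        free += p["size"]          # reclaim the victim's memory
        victims.append(p["id"])
    return victims, free
```

The design choice mirrored here is that eviction is driven by a per-partition value model rather than recency alone, which is what lets a weighted policy keep a rarely-touched but expensive-to-recompute partition that plain LRU would discard.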
Due to the explosion of network data traffic and IoT devices, edge servers are overloaded and slow to respond to the massive volume of online requests. A large number of studies have shown that edge caching can solve this problem effectively. This paper proposes a distributed edge collaborative caching mechanism for Internet online request service scenarios. It solves the problem of large average access delay caused by the unbalanced load of edge servers, meets users' differentiated service demands, and improves user experience. In particular, the edge cache node selection algorithm is optimized, and a novel edge cache replacement strategy that considers differentiated user requests is proposed. This mechanism shortens the response time for a large number of user requests. Experimental results show that, compared with the current state-of-the-art online edge caching algorithm, the proposed edge collaborative caching strategy reduces the average response delay by 9%, increases user utility by 4.5 times in differentiated service scenarios, and significantly reduces the time complexity of the edge caching algorithm.
Mitochondria are essential cellular organelles critical for generating adenosine triphosphate for cellular homeostasis, as well as various mechanisms that can lead to both necrosis and apoptosis. The field of "mitochondrial medicine" is emerging, in which injury/disease states are targeted therapeutically at the level of the mitochondrion, including specific antioxidants, bioenergetic substrate additions, and membrane uncoupling agents. Consequently, novel mitochondrial transplantation strategies represent a potentially multifactorial therapy leading to increased adenosine triphosphate production, decreased oxidative stress, mitochondrial DNA replacement, improved bioenergetics, and tissue sparing. Herein, we briefly describe the history of mitochondrial transplantation and the various techniques used for both in vitro and in vivo delivery, the benefits associated with successful transference into both peripheral and central nervous system tissues, along with caveats and pitfalls that hinder the advancement of this novel therapeutic.
The hit rate, a major metric for evaluating proxy caches, is mostly limited by the proxy cache's replacement strategy. However, in traditional proxy caches the hit rate does not reliably predict how well a proxy cache will perform, because the cache counts any hit in its caching space, which contains many pages without useful information, so its replacement strategy fails to determine which pages to keep and which to release. Proxy cache efficiency can be measured more accurately using the valid hit rate introduced in this paper. An efficient replacement strategy based on the Site Graph model for WWW (World Wide Web) documents is also discussed, in which user access behavior is analyzed as the basis for replacement decisions. Simulation results demonstrate that this replacement strategy improves proxy cache efficiency.
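The distinction between the raw hit rate and the valid hit rate can be sketched as follows. Here the set of "useful" pages is simply given as an input, whereas the paper derives usefulness from its Site Graph model of user access behavior.

```python
def hit_rates(requests, cache, useful):
    """Return (raw_hit_rate, valid_hit_rate) over a request trace.
    A raw hit is any request found in the cache; a valid hit also
    requires the page to carry useful information."""
    hits = sum(1 for r in requests if r in cache)
    valid_hits = sum(1 for r in requests if r in cache and r in useful)
    n = len(requests)
    return hits / n, valid_hits / n
```

A cache stuffed with useless pages can thus report a high raw hit rate while its valid hit rate, the metric the paper argues for, stays low.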
The job-shop scheduling problem (JSP) is a typical NP-hard combinatorial optimization problem with a broad background in engineering applications. Effective approaches to the JSP are a hot topic in manufacturing-systems research. However, for some JSPs, even moderate-size instances, it is very difficult to find an optimal solution within a reasonable time because of process constraints and the large, complex solution space. In this paper, an adaptive multi-population genetic algorithm (AMGA) is proposed to solve this problem. First, using multiple populations and an adaptive crossover probability enlarges the search scope and improves search performance. Second, an adaptive mutation probability and an elite replacement mechanism accelerate convergence. The approach is tested on classical benchmark JSPs taken from the literature and compared with other approaches. The computational results show that the proposed AMGA produces optimal or near-optimal values on almost all tested benchmark instances, so AMGA can be considered an effective method for solving the JSP.
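The abstract does not specify the adaptation rule. One common form of an adaptive crossover/mutation probability (in the spirit of Srinivas–Patnaik adaptive GAs; the constants here are illustrative, not from the paper) keeps the rate high for below-average individuals and lowers it as fitness approaches the population best:

```python
def adaptive_rate(fitness, f_avg, f_max, high=0.9, low=0.3):
    """Adaptive crossover/mutation rate: below-average individuals
    keep the high rate (explore aggressively); above-average
    individuals get a linearly reduced rate (preserve good genes).
    Assumes higher fitness is better."""
    if f_max == f_avg:   # degenerate population with no fitness spread
        return high
    if fitness < f_avg:  # below average: full disruption
        return high
    # linearly interpolate from `high` at f_avg down to `low` at f_max
    return high - (high - low) * (fitness - f_avg) / (f_max - f_avg)
```

Protecting the fittest individuals from disruption while keeping pressure on the rest is what lets such a scheme speed up convergence without collapsing diversity, which matches the abstract's stated motivation for adaptive probabilities.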
Funding: Supported by the National Natural Science Foundation of China (60803160 and 61272110), the Key Projects of the National Social Science Foundation of China (11&ZD189), the Natural Science Foundation of Hubei Province (2013CFB334), the Natural Science Foundation of the Educational Agency of Hubei Province (Q20101110), the State Key Lab of Software Engineering Open Foundation of Wuhan University (SKLSE2012-09-07), and the Wuhan Key Technology Support Program (2013010602010216).
Funding: Supported by the National Natural Science Foundation of China (61872284), the Key Research and Development Program of Shaanxi (2023-YBGY-203, 2023-YBGY-021), the Industrialization Project of the Shaanxi Provincial Department of Education (21JC017), the "Thirteenth Five-Year" National Key R&D Program (2019YFD1100901), the Natural Science Foundation of Shaanxi Province, China (2021JLM-16, 2023-JC-YB-825), and the Key R&D Plan of Xianyang City (L2023-ZDYF-QYCX-021).
Funding: This work is supported by the National Natural Science Foundation of China (62072465) and the Key-Area Research and Development Program of Guangdong Province (2019B010107001).
Funding: Funded by NIH R21NS096670 (AGR), the University of Kentucky Spinal Cord and Brain Injury Research Center Chair Endowment (AGR), and NIH/NINDS 2P30NS051220.
Funding: Supported by the State High-Tech Development Plan of China (No. 863-306-ZT01-03-1), the IBM China Research Lab, and Huawei Enterprise Funding on Science and Technology.