Abstract: From the perspective of supply-side reform, Nanjing Medical University has focused on the new-era requirements for cultivating undergraduate pharmacy talents who "understand medicine and excel in pharmacy". Centered on the student as the main body (I), the university has explored and practiced the "I-SPARK" undergraduate pharmacy talent training system reform for medical colleges along five dimensions: Scientific Research Promoting Teaching, Patient-centered Humanistic Quality Cultivation, Assurance System of Cooperation Education, Research and Training Bases, and Keen Sense of Patriotism and Responsibility, achieving good practical results.
Funding: the National Natural Science Foundation of China (61872284); Key Research and Development Program of Shaanxi (2023-YBGY-203, 2023-YBGY-021); Industrialization Project of Shaanxi Provincial Department of Education (21JC017); "Thirteenth Five-Year" National Key R&D Program Project (2019YFD1100901); Natural Science Foundation of Shaanxi Province, China (2021JLM-16, 2023-JC-YB-825); Key R&D Plan of Xianyang City (L2023-ZDYF-QYCX-021).
Abstract: Spark, a distributed computing platform, has developed rapidly in the field of big data. Its in-memory computing model reduces disk read overhead and shortens data processing time, giving it broad application prospects in large-scale computing workloads such as machine learning and image processing. However, the performance of the Spark platform still needs improvement. When a large number of tasks are processed simultaneously, Spark's cache replacement mechanism cannot identify high-value data partitions, so memory resources are not fully utilized and platform performance suffers. To address the problem that Spark's default cache replacement algorithm cannot accurately evaluate high-value data partitions, the weight influence factors of data partitions are first modeled and evaluated. Then, based on this weighting model, a cache replacement algorithm based on dynamic weighted data value is proposed, which takes both hit rate and data differences into account and implements better integration and usage strategies on top of LRU (Least Recently Used). The weight update algorithm recomputes a partition's weight whenever its information changes, accurately measuring the partition's importance in the current job; the cache removal algorithm clears partitions of no further value from the cache to release memory resources; and the weight replacement algorithm combines partition weights and partition information to replace RDD partitions when the remaining memory space is insufficient. Finally, the proposed algorithm is verified experimentally on a Spark cluster. Experiments show that the algorithm effectively improves the cache hit rate, enhances platform performance, and reduces job execution time by 7.61% compared with existing improved algorithms.
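The three-part scheme the abstract describes (weight update, cache removal of zero-value partitions, weight-based replacement over an LRU baseline) can be sketched in simplified form. This is a hypothetical illustration, not the paper's actual implementation: the `Partition` fields (`compute_cost`, `ref_count`, `size`) and the weight formula are assumed stand-ins for the weight influence factors the paper models, and real Spark eviction operates inside the BlockManager, not in application code.

```python
from dataclasses import dataclass, field
import itertools

_clock = itertools.count()  # global access counter for LRU recency


@dataclass
class Partition:
    pid: str
    size: int              # bytes the partition occupies in cache
    compute_cost: float    # assumed: estimated cost to recompute the partition
    ref_count: int         # assumed: expected remaining references in the job
    last_access: int = field(default_factory=lambda: next(_clock))

    def weight(self) -> float:
        # Hypothetical dynamic weight: value rises with recomputation cost
        # and remaining uses, falls with size; zero when never needed again.
        if self.ref_count == 0:
            return 0.0
        return (self.compute_cost * self.ref_count) / self.size


class WeightedCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.used = 0
        self.parts: dict[str, Partition] = {}

    def access(self, p: Partition) -> None:
        if p.pid in self.parts:
            # Weight update step: partition info changed, so its weight
            # (via ref_count) and recency are refreshed on this hit.
            cached = self.parts[p.pid]
            cached.ref_count = max(cached.ref_count - 1, 0)
            cached.last_access = next(_clock)
            return
        # Cache removal step: drop partitions with no remaining value.
        for pid in [k for k, v in self.parts.items() if v.weight() == 0.0]:
            self._evict(pid)
        # Weight replacement step: when space is insufficient, evict the
        # lowest-weight partition (recency breaks ties, as in LRU).
        while self.used + p.size > self.capacity and self.parts:
            victim = min(self.parts.values(),
                         key=lambda v: (v.weight(), v.last_access))
            if victim.weight() >= p.weight():
                return  # incoming partition is not worth caching
            self._evict(victim.pid)
        if self.used + p.size <= self.capacity:
            self.parts[p.pid] = p
            self.used += p.size

    def _evict(self, pid: str) -> None:
        self.used -= self.parts.pop(pid).size
```

Under this sketch, a small cheap-to-recompute partition is evicted before a large expensive one even if it was accessed more recently, which is exactly the behavior plain LRU cannot express.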