The paper addresses the challenge of transmitting a large number of files stored in a data center (DC), encrypting them with compilers, and sending them through a network within an acceptable time. Given the large number of files, a single compiler may not be sufficient to encrypt the data in an acceptable time. In this paper, we consider the problem of several compilers, and the objective is to find an algorithm that gives an efficient schedule for the given files to be compiled by the compilers. The main objective of the work is to minimize the gap in the total size of assigned files between compilers. This minimization ensures a fair distribution of files across compilers. The problem is considered very hard. This paper presents two research axes. The first axis concerns architecture: we propose a novel pre-compiler architecture in this context. The second axis is algorithmic development: we develop six algorithms to solve the problem, based on the dispatching-rules method, a decomposition method, and an iterative approach. These algorithms give approximate solutions for the studied problem. Experimental results are presented to show the performance of the algorithms, measured by several indicators. In addition, five classes with a total of 2,350 instances are proposed to test the algorithms. A comparison of the proposed algorithms is presented in several tables and discussed to show the performance of each algorithm. The results show that the best algorithm is the Iterative-mixed Smallest-Longest Heuristic (ISL), with a percentage equal to 97.7% and an average running time of 0.148 s. No other algorithm exceeded 22%. The best algorithm excluding ISL is the Iterative-mixed Longest-Smallest Heuristic (ILS), with a percentage equal to 21.4% and an average running time of 0.150 s.
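The gap-minimizing assignment described above can be illustrated with a simple dispatching-rule baseline: sort files by size (longest first) and always assign the next file to the least-loaded compiler. This is only a sketch of a classical dispatching rule, not the paper's actual ISL or ILS heuristics; the function and variable names are illustrative.

```python
def assign_files(sizes, k):
    """Greedy longest-first dispatching: assign each file to the
    currently least-loaded compiler (a baseline sketch, not the
    paper's ISL/ILS algorithms)."""
    loads = [0] * k                      # total size assigned to each compiler
    assignment = [[] for _ in range(k)]  # file sizes per compiler
    for size in sorted(sizes, reverse=True):
        i = min(range(k), key=loads.__getitem__)  # least-loaded compiler
        loads[i] += size
        assignment[i].append(size)
    return loads, assignment

loads, _ = assign_files([9, 7, 6, 5, 4, 3], k=2)
gap = max(loads) - min(loads)  # the quantity the paper seeks to minimize
```

On this toy instance the rule happens to close the gap completely; in general such greedy rules only approximate the optimum, which motivates the paper's iterative refinements.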
This paper briefly introduces the systemic structure of the Vocational English series, Basic English, and puts forward its four key compiling principles, namely system, cognition, practicality, and interest.
Android, an open-source system developed by Google, has experienced rapid development in the field of intelligent mobile devices over the past few years because of its advantages: open source code and excellent functionality. The number of professionals and enthusiasts researching Android is growing rapidly at the same time. Android, as an abstraction between the software layer and the hardware layer based on the Linux kernel, allows system optimization through modification of the kernel. The purpose of this design is to master the processes of kernel compiling and transplanting, and to learn the methods of memory scheduling algorithms and kernel memory testing. First, this thesis introduces the installation of a Linux system; then it presents the method of building the environment for Android kernel compiling and the compiling process itself. The key point of the design is to introduce the SLAB, SLOB, SLUB, and SLQB allocators used in memory scheduling, and to carry out research on optimization with these memory allocators. An HTC Incredible S, an experimental mobile phone whose Android kernel version is 2.6.35, is employed for all of these tests. A comparison of the kernel code before and after optimization has been made. The two kernels were transplanted onto the experimental phone and tested for stability, memory performance, and overall performance. Finally, it concludes that the kernel with the transplanted SLQB memory allocator is the best of all.
At present, there are some static code analyses and optimizations that can be applied to Concurrent C programs to improve their performance or verify their logical correctness. These analyses and optimizations are inter-process. To make their implementation easy, we propose a new method to construct an optimizing compiling system, CCOC, for Concurrent C. CCOC supports inter-process code analysis and optimization of Concurrent C programs without affecting the system's portability or the separate compilation of source programs. We also briefly discuss some implementation details of CCOC.
Currently, there are many problems in the construction of urban cemeteries, such as improper location, low land utilization, backward greening facilities, and imperfect cemetery management, which have greatly affected people's normal production and life. This article discusses the establishment of a sustainable city cemetery planning and compiling system at three levels, macro, medium, and micro, in order to perfect the present cemetery system.
As China and the Southeast Asian countries have accelerated and globalized their economic development, karst environmental problems have become increasingly prominent, and studying karst geology and compiling karst geological maps is quite important. Therefore, based on a wide collection of data from Southeast Asian countries, an international cooperative map compilation has been carried out. Through comprehensive research and analysis, a unified understanding has been achieved of the compiling principles, contents, and representation methods, and the Distribution of Karst in Southern China and Southeast Asia (1:5,000,000) has been compiled, which provides a foundation for environmental protection and scientific studies of karst geology.
Commemorating the 20th Anniversary of the Foundation of the World Federation of Acupuncture-Moxibustion Societies (WFAS). During the past 30 years, acupuncture has developed greatly throughout the world. The international acupuncture community has been enlarged, and a new page has been created in the cause of human health and medicine. On the eve of "The 20th Anniversary of the Foundation of WFAS—International Acupuncture Congress", in order to show the achievements of the acupuncture community, set up a new image of the global acupuncture community, expand the influence of acupuncture, and promote international cooperation, the World Journal of Acupuncture-Moxibustion will print a large-scale Special Issue, Acupuncture in the Whole World—Global Events Record of Acu-Moxi, and the call for articles starts now.
With the development of maritime English teaching, there is a greater demand for high-quality maritime English textbooks. This reflects the diversification of the new generation of maritime English textbooks, but it also brings forward principles for textbook compilation. Maritime English textbook construction should adhere to the principle of cultivating students' listening, speaking, reading, writing, and translation skills, and should incorporate linguistic theories, teaching concepts, and an understanding of classroom teaching and student learning, so that students continue to improve their English while learning the specialized knowledge. The problem we face, therefore, is to establish scientific and rational principles and to compile textbooks that suit the practice of maritime English teaching and learning.
In the Internet era, mutual sharing, low cost, and the absence of time and geographical restrictions on network dissemination provide the public with a new way to experience entertainment and share network information resources. At the same time, important drawbacks have emerged, mainly the contradiction between resource sharing and copyright protection: shared cyber resources often infringe the original author's right to network dissemination of information. With the rapid development of the Internet, this problem is becoming increasingly serious. Centered on the information construction of university archives management, this paper first discusses the necessity of that construction for improving the service level of university archives and promoting the development of colleges and universities. It then introduces specific measures for realizing the informatization of university archives management: standardization and digitization of archives management, construction of archives information networks, and comprehensive training of archives management personnel. The fixed assets of a university are an important part of state-owned assets, and their management level is directly related to the safety of state-owned assets, the efficiency of asset use, and the promotion of teaching and research in universities. University fixed-asset data information management is an important aspect of asset management; it provides a decision-making basis for the management of fixed assets in colleges and universities and affects the efficiency of the entire asset management process.
Objective: To compile nursing undergraduates' opinions of and suggestions for a biochemistry textbook in order to improve its readability and applicability. Methods: The investigation involved 279 nursing undergraduates, who received a self-made questionnaire used to analyze and study the students' suggestions and opinions on the content of the biochemistry textbook. Results: The investigation revealed that the textbook's difficulty was negatively associated with students' interest (P < 0.05), and the textbook's importance was positively associated with students' interest (P < 0.05). Students suggested that the textbook should be closely related to nursing, and that its contents, difficulty, structure, and charts should be adjusted appropriately. Conclusions: The content of the nursing version of the biochemistry textbook should be driven by the needs of the nursing major and should focus on simplifying the content and making it practical. This will make the textbook vivid and readable, enhancing students' interest in and benefit from it.
This article inquires into theoretical and practical problems in the compilation of an atlas of regional natural disasters. (1) The basic theory of compiling such an atlas is founded on the combination of disaster science, cartography, and regional geography. The content of a regional disaster atlas should comprise at least the following five parts: hazard-formative environments and hazard-affected bodies; hazards; disaster effects; monitoring and warning systems for natural disasters; and countermeasures for natural disaster reduction. (2) The cartographic design of a regional disaster atlas should likewise comprise at least five parts: the base map system, cartographic representation, the symbol system, the color system, and the map edition and layout system. (3) Based on this theory and cartographic design, the Atlas of Natural Disasters in China, which objectively reveals the temporal and spatial patterns of China's regional natural disasters, has been compiled and published.
At present, there are two modes in the compilation of textbooks of traditional Chinese medicine: one is stuck in its own way, and the other copies Western medicine completely. As a result, the textbooks of traditional Chinese medicine colleges and universities are either reduced to a purposeless hodgepodge, or they cater to the Western medicine system, giving up the academic system of traditional Chinese medicine and losing its essence. Through the "diagnosis and treatment strategy" and "knowledge link" sections of the new Chinese medicine textbooks, the author tries to make a breakthrough in compilation and puts forward compilation ideas that rectify the position of Chinese medicine textbooks, start from the learning mindset of Western medicine students, and reflect the characteristics of Chinese medicine and modern research results.
Edge devices, due to their limited computational and storage resources, often require the use of compilers for program optimization. Ensuring the security and reliability of these compilers is therefore of paramount importance in the emerging field of edge AI. One widely used testing method for this purpose is fuzz testing, which detects bugs by feeding random test cases to the target program. However, this process consumes significant time and resources. To improve the efficiency of compiler fuzz testing, it is common practice to use test case prioritization techniques. Some researchers use machine learning to predict the code coverage of test cases, aiming to maximize the testing capability for the target compiler by increasing the overall predicted coverage of the test cases. Nevertheless, these methods can only forecast the compiler's code coverage at a specific optimization level, potentially missing many optimization-related bugs. In this paper, we introduce C-CORE (short for Clustering by Code Representation), the first framework to prioritize test cases according to their code representations, which are derived directly from the source code. This approach avoids being limited to specific compiler states and extends to a broader range of compiler bugs. Specifically, we first train a scaled pre-trained programming language model to capture as many common features as possible from the test cases generated by a fuzzer. Using this pre-trained model, we then train two downstream models: one for predicting the likelihood of triggering a bug and another for identifying code representations associated with bugs. Subsequently, we cluster the test cases according to their code representations and select the highest-scoring test case from each cluster as a high-quality test case. This reduction in redundant test cases saves time. Comprehensive evaluation results reveal that code representations are better at distinguishing test capabilities and that C-CORE significantly enhances testing efficiency. Across four datasets, C-CORE increases the average percentage of faults detected (APFD) by 0.16 to 0.31 and reduces test time by over 50% in 46% of cases. Compared with the best results from approaches using predicted code coverage, C-CORE improves the APFD value by 1.1% to 12.3% and achieves an overall time saving of 159.1%.
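The APFD metric used in the evaluation above has a standard closed form: for n tests and m faults, APFD = 1 − (TF₁ + … + TFₘ)/(n·m) + 1/(2n), where TFᵢ is the 1-based position of the first test in the prioritized order that detects fault i. A minimal sketch (the fault matrix below is made-up illustrative data, not from the paper):

```python
def apfd(fault_matrix, order):
    """Average Percentage of Faults Detected for a given test ordering.

    fault_matrix[t][f] is True if test t detects fault f.
    APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2 * n),
    where TF_i is the 1-based rank of the first test detecting fault i.
    """
    n, m = len(order), len(fault_matrix[0])
    total = 0
    for f in range(m):
        # rank of the first test in the prioritized order that hits fault f
        total += next(i + 1 for i, t in enumerate(order) if fault_matrix[t][f])
    return 1 - total / (n * m) + 1 / (2 * n)

# Illustrative data: 3 tests, 2 faults; test 2 detects both faults.
faults = [[True, False], [False, True], [True, True]]
strong_first = apfd(faults, order=[2, 0, 1])  # broad test scheduled first
weak_first = apfd(faults, order=[0, 1, 2])
```

Running the broad test first yields the higher APFD, which is exactly the behavior a prioritization technique such as C-CORE aims to maximize.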
The high-level Compiler Intermediate Language (CIL) is a general-purpose description language for a parallel graph rewriting computational model, intended for the parallel implementation of declarative languages on multiprocessor systems. In this paper, we first outline a new Hybrid Execution Model (HEM) and the corresponding parallel abstract machine PAM/TGR, based on the Extended parallel Graph Rewriting Computational Model (EGRCM), for implementing the CIL language on distributed-memory multiprocessor systems. We then focus on compiling the CIL language with various optimizing techniques, such as pattern matching, rule indexing, node ordering, and compile-time partial scheduling. Experimental results on a 16-node Transputer array demonstrate the effectiveness of our model and strategies.
Purpose: This study explores a novel approach to compiling life-oriented moral textbooks for elementary schools in China, specifically focusing on Morality and Law. Design/Approach/Methods: Adopting Aristotle's Poetics as its theoretical perspective, this study illustrates and analyzes the mimetic approach used in compiling the life-oriented moral education textbook Morality and Law. Findings: The mimetic approach involves imitating children's real activities, thoughts, and feelings in textbooks. It comprises three strategies: constructing children's life events as building blocks for textbook compilation, designing an intricate textual device exposing the wholeness of children's life actions, and designing inward learning activities leading to children's inner worlds. Originality/Value: From the perspective of Aristotle's Poetics, the approach to compilation in Morality and Law can be defined as mimetic, and the compilation activity itself can be described as a process of mimesis. This article thus presents a new approach to compiling moral education textbooks and an innovative way to understand the nature of the compiling activity.
As an increasingly large amount of data is generated from edge devices such as smart homes, mobile phones, and wearable devices, it becomes crucial for many applications to deploy machine learning models across edge devices. The execution speed of the deployed model is a key element of service quality. For highly heterogeneous edge deployment scenarios, deep learning compiling is a novel approach that aims to solve this problem: it defines models using certain DSLs and generates efficient code implementations for different hardware devices. However, two aspects have not yet been thoroughly investigated. The first is the optimization of memory-intensive operations; the second is the heterogeneity of the deployment targets. To that end, in this work we propose a system solution that optimizes memory-intensive operations, optimizes subgraph distribution, and enables the compilation and deployment of DNN models on multiple targets. The evaluation results show the performance of our proposed system.
Numerous clothing enterprises have relatively inefficient assembly line planning due to insufficient optimization of bottleneck stations. As a result, production efficiency is low and production organization falls short of expectations. Aiming at the problem of flexible process route planning in garment workshops, a multi-objective genetic algorithm (MOGA) is proposed to solve the assembly line balance optimization problem while minimizing the machine adjustment path. The encoding adopts an object-oriented path representation, and the initial population is generated by random topological sorting based on an in-degree selection mechanism. The algorithm adapts its mutation and crossover operations to the characteristics of the clothing process to avoid generating invalid offspring. During the iterative process, bottleneck stations are optimized by reasonable process splitting, and process allocation conforms to each station's strict limit on the number of machines, improving compilation efficiency. The effectiveness and feasibility of the algorithm are demonstrated through the analysis of clothing cases. Compared with manual process allocation, the compilation efficiency of MOGA is increased by more than 15%, and the minimum machine adjustment path is achieved. The results are in line with the expected optimization effect.
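The initial-population step described above, random topological sorting with in-degree selection, can be sketched as follows. The precedence graph, function, and variable names here are illustrative assumptions, not the paper's actual encoding: each draw picks uniformly among the operations whose predecessors are already scheduled, so every individual respects the process precedence constraints.

```python
import random

def random_topo_order(n, edges, seed=None):
    """Generate one random precedence-feasible operation sequence by
    repeatedly picking a random operation with current in-degree zero
    (a sketch of an in-degree selection mechanism for GA initialization)."""
    rng = random.Random(seed)
    indeg = [0] * n
    succ = [[] for _ in range(n)]
    for u, v in edges:                  # u must precede v
        succ[u].append(v)
        indeg[v] += 1
    ready = [i for i in range(n) if indeg[i] == 0]
    order = []
    while ready:
        op = rng.choice(ready)          # random choice among ready operations
        ready.remove(op)
        order.append(op)
        for v in succ[op]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return order

# Toy precedence graph: operation 0 before 1 and 2, both before 3.
population = [random_topo_order(4, [(0, 1), (0, 2), (1, 3), (2, 3)], seed=s)
              for s in range(10)]
```

Generating each individual this way keeps the whole initial population feasible, so crossover and mutation only need to preserve, rather than repair, precedence validity.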
Funding: The author would like to thank the Deanship of Scientific Research at Majmaah University for supporting this work under Project No. R-2022-85.
Funding: Sponsored by the China and Southeast Asia Karst Geology Series Maps Project, China Geological Survey (12120114006301).
文摘As China and Southeast Asian countries have accelerated and globalized their economic development, karst environmental problems have become increasingly prominent and studying on and compiling maps of karst geology is quite important. Therefore, based on a wide collection of data in Southeast Asian countries, a cooperative map compilation has been carried out internationally. Through comprehensive research and analysis, a unified understanding has been achieved in terms of compiling principles, contents and representing methods, Distribution of Karst in Southern China and Southeast Asia(1/5 000 000) has been compiled, which provides foundations for environmental protection and scientific studies of karst geology.
文摘Commemorate the 20th Anniversary of the Foundation of World Federation of Acupuncture-Moxibustion Societies (WFAS)ABSTRACT During the past 30 years, acupuncture has developed a lot in the whole world. The international group of acupuncture has been enlarged and a new page was created in the cause of human health and medicine. At the eve of "The 20th Anniversary of the Foundation of WFAS—International Acupuncture Congress", in order to show the achievements in the circle of acupuncture and set up a new image of global acupuncture circle, expand the influence of acupuncture, promote international cooperation, World Journal of Acupuncture-Moxibustion will print a Special Issue in large scale, Acupuncture in the Whole World—Global Events Record of Acu-Moxi and calling for articles starts from now on.
文摘With the development of maritime English teaching, there is a greater demand of high quality maritime English textbooks. This is a reflection of the diversification of new generation maritime English textbooks. But it also brings forth the principles for the textbook compilation. Maritime English textbook construction should adhere to the principle of the cultivation in students' listening, speaking, reading, writing and translation and incorporate the linguistic theories, teaching concepts and the understanding of classroom teaching and students' learning, so that the students will continue to improve their English levels in the process of learning the special knowledge. Therefore, the problem we faced with is to make the scientific and rational principles and compile textbooks that suit the practice of maritime English teaching and learning.
文摘Intemet era, mutual sharing, low cost, unlimited time and geographical restrictions on network dissemination, to the public toprovide a new way of entertainment experience and sharing in the network information resources at the same time, also highlights importantdrawbacks, mainly reflected the contradiction between resource sharing and copyright protection, sharing is often cyber source violated theright to network dissemination of information to the original author. With the rapid development of the Internet, the seriousness of this problemis becoming increasingly prominent. Based on the information construction of university archives management as the center to carry outresearch, improve the service level of university archives from the first two aspects discusses the necessity of the information constructionof the archives management, promoting the development of colleges and universities. Secondly introduces specific measures of realizing theinformatization construction of university archives management and archives management standardization, digitization, archives informationnetwork construction, the archives management personnel to conduct a comprehensive training. The fixed assets of university is an important partof the state-owned assets, and its asset management level is directly related to the safety of state-owned assets, the use efficiency of assets and thepromotion of the teaching and research level in universities. The university fixed assets data information management is an important aspect ofasset management, it provides decision-making basis for the management of fixed assets in colleges and universities, affecting the efficiency ofthe entire asset management.
文摘Objective: To compile nursing undergraduates' opinions and suggestions for biochemistry textbook to improve the readability and applicability of a textbook of biochemistry. Methods: This investigation involved 279 nursing undergraduates through delivery of a self-made questionnaire, which was to be used to analyze and study the students' suggestions and opinions on the content of the biochemistry textbook. Results: The investigation revealed that the textbook's difficulty was associated negatively with stu- dents' interest (P 〈 0.05), and the textbook's importance was associated positively with students' interest (P 〈 0.05). Students suggested that the textbook should be closely related to nursing. The contents, difficulty, structure and charts of the textbook should be adjusted properly. Conclusions: The content of the nursing version of the biochemistry textbook should be driven by the needs of the nursing major and should focus on simplifying the textbook's content, and making it practical. This will make the textbook vivid and readable, enhancing the interest of students to accept and benefit from the textbook.
Abstract: This article inquires into theoretical and practical problems in the compilation of an atlas of regional natural disasters. (1) The basic theory of compiling an atlas of regional disasters is founded on the combination of disaster science, cartography, and regional geography. The content structure of a regional disaster atlas should comprise at least the following five parts: hazard-formative environments and hazard-affected bodies, hazards, disaster effects, a monitoring and warning system for natural disasters, and countermeasures for natural disaster reduction. (2) The cartographic design of a regional disaster atlas should comprise at least the following five parts: the base map system, cartographic representation, the symbol system, the color system, and the map edition and layout system. (3) Based on this theory and cartographic design, the Atlas of Natural Disasters in China, which objectively reveals the temporal and spatial patterns of regional natural disasters in China, has been compiled and published.
Abstract: At present, there are two modes in the compilation of textbooks of traditional Chinese medicine: one is stuck in its own way, and the other copies Western medicine completely. As a result, the textbooks of traditional Chinese medicine colleges and universities are either reduced to a purposeless hodgepodge, or they cater to the Western medicine system, give up the academic system of traditional Chinese medicine, and lose its essence. Through the "diagnosis and treatment strategy" and "knowledge link" features of the new Chinese medicine textbooks, the author tries to make a breakthrough in compilation, and puts forward compilation ideas that rectify the position of Chinese medicine textbooks, start from the learning mindset of Western medicine students, and reflect the characteristics of Chinese medicine and modern research results.
Abstract: Edge devices, due to their limited computational and storage resources, often require the use of compilers for program optimization. Therefore, ensuring the security and reliability of these compilers is of paramount importance in the emerging field of edge AI. One widely used testing method for this purpose is fuzz testing, which detects bugs by feeding random test cases into the target program. However, this process consumes significant time and resources. To improve the efficiency of compiler fuzz testing, it is common practice to use test case prioritization techniques. Some researchers use machine learning to predict the code coverage of test cases, aiming to maximize the test capability for the target compiler by increasing the overall predicted coverage of the test cases. Nevertheless, these methods can only forecast the code coverage of the compiler at a specific optimization level, potentially missing many optimization-related bugs. In this paper, we introduce C-CORE (short for Clustering by Code Representation), the first framework to prioritize test cases according to their code representations, which are derived directly from the source code. This approach avoids being limited to specific compiler states and extends to a broader range of compiler bugs. Specifically, we first train a scaled pre-trained programming language model to capture as many common features as possible from the test cases generated by a fuzzer. Using this pre-trained model, we then train two downstream models: one for predicting the likelihood of triggering a bug and another for identifying code representations associated with bugs. Subsequently, we cluster the test cases according to their code representations and select the highest-scoring test case from each cluster as a high-quality test case. This reduction in redundant test cases leads to time savings. Comprehensive evaluation results reveal that code representations are better at distinguishing test capabilities, and C-CORE significantly enhances testing efficiency. Across four datasets, C-CORE increases the average percentage of faults detected (APFD) value by 0.16 to 0.31 and reduces test time by over 50% in 46% of cases. Compared with the best results from approaches using predicted code coverage, C-CORE improves the APFD value by 1.1% to 12.3% and achieves an overall time saving of 159.1%.
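The APFD metric reported above has a standard closed form: for n test cases and m faults, APFD = 1 − (TF₁ + … + TFₘ)/(n·m) + 1/(2n), where TFᵢ is the position of the first test in the prioritized order that reveals fault i. A minimal sketch of this computation follows; the fault matrix is illustrative, not taken from the paper's datasets:

```python
# Sketch: computing APFD (average percentage of faults detected) for a
# prioritized test ordering. Assumes every fault is detected by at least
# one test in the ordering.

def apfd(ordering, fault_matrix):
    """ordering: list of test-case ids in prioritized order.
    fault_matrix: dict mapping test id -> set of fault ids it detects."""
    n = len(ordering)
    faults = set().union(*fault_matrix.values())
    m = len(faults)
    # 1-based position of the first test that reveals each fault.
    first_pos = {}
    for pos, tid in enumerate(ordering, start=1):
        for f in fault_matrix.get(tid, ()):
            first_pos.setdefault(f, pos)
    return 1 - sum(first_pos[f] for f in faults) / (n * m) + 1 / (2 * n)

# Illustrative fault matrix: which faults each test reveals.
faults = {"t1": {"f1"}, "t2": {"f2", "f3"}, "t3": set(), "t4": {"f1"}}
print(apfd(["t2", "t1", "t3", "t4"], faults))  # fault-revealing tests first
print(apfd(["t3", "t4", "t1", "t2"], faults))  # weaker ordering scores lower
```

An ordering that surfaces fault-revealing tests early scores closer to 1, which is why prioritization frameworks such as the one described above optimize for it.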
Abstract: The high-level Compiler Intermediate Language (CIL) is a general-purpose description language for a parallel graph rewriting computational model, intended for the parallel implementation of declarative languages on multiprocessor systems. In this paper, we first outline a new Hybrid Execution Model (HEM) and the corresponding parallel abstract machine PAM/TGR, based on the Extended parallel Graph Rewriting Computational Model (EGRCM), for implementing the CIL language on distributed-memory multiprocessor systems. We then focus on compiling the CIL language with various optimizing techniques such as pattern matching, rule indexing, node ordering, and compile-time partial scheduling. Experimental results on a 16-node Transputer array demonstrate the effectiveness of our model and strategies.
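Of the optimizations listed, rule indexing is the simplest to illustrate: grouping rewrite rules by the root symbol of their left-hand side means that matching a graph node consults only the rules that could possibly apply, instead of scanning the whole rule base. The following sketch is an illustration of that general idea only; the symbols and rules are invented for the example and are not taken from CIL itself:

```python
# Sketch of rule indexing for a graph-rewriting system: rules are
# bucketed by the root symbol of their left-hand side, so candidate
# lookup is a dictionary access rather than a scan over all rules.
from collections import defaultdict

class RuleIndex:
    def __init__(self):
        self._by_root = defaultdict(list)

    def add(self, root_symbol, rule):
        """Register a rule under the root symbol of its left-hand side."""
        self._by_root[root_symbol].append(rule)

    def candidates(self, node_symbol):
        # Only rules whose LHS root matches the node can possibly apply.
        return self._by_root.get(node_symbol, [])

# Illustrative rules for Peano-style arithmetic.
index = RuleIndex()
index.add("add", "add(0, y) -> y")
index.add("add", "add(s(x), y) -> s(add(x, y))")
index.add("mul", "mul(0, y) -> 0")

print(index.candidates("add"))  # only the two 'add' rules are tried
print(index.candidates("if"))   # no rules indexed under 'if'
```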
Abstract: Purpose: This study explores a novel approach to compiling life-oriented moral textbooks for elementary schools in China, specifically focusing on Morality and Law. Design/Approach/Methods: Adopting Aristotle's Poetics as its theoretical perspective, this study illustrates and analyzes the mimetic approach used in compiling the life-oriented moral education textbook Morality and Law. Findings: The mimetic approach involves imitating children's real activities, thoughts, and feelings in textbooks. It comprises three strategies: constructing children's life events as building blocks for textbook compilation, designing an intricate textual device exposing the wholeness of children's life actions, and designing inward learning activities leading to children's inner worlds. Originality/Value: From the perspective of Aristotle's Poetics, the compilation approach in Morality and Law can be defined as mimetic, and the compilation activity itself can be described as a process of mimesis. This article thus presents a new approach to compiling moral education textbooks and an innovative way to understand the nature of this compiling activity.
Funding: Supported by the National Natural Science Foundation of China (U21A20519).
Abstract: As a large amount of data is increasingly generated by edge devices, such as smart homes, mobile phones, and wearable devices, it becomes crucial for many applications to deploy machine learning models across edge devices. The execution speed of the deployed model is a key element in ensuring service quality. Considering a highly heterogeneous edge deployment scenario, deep learning compiling is a novel approach that aims to solve this problem: it defines models using certain DSLs and generates efficient code implementations for different hardware devices. However, two aspects have not yet been thoroughly investigated. The first is the optimization of memory-intensive operations, and the second is the heterogeneity of the deployment target. To that end, in this work, we propose a system solution that optimizes memory-intensive operations, optimizes subgraph distribution, and enables the compiling and deployment of DNN models on multiple targets. The evaluation results show the performance of our proposed system.
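The subgraph-distribution problem mentioned above can be pictured as load balancing across devices of unequal speed. The sketch below is a generic longest-processing-time-first greedy placement, not the paper's actual algorithm; the subgraph names, cost estimates, and device speeds are all illustrative assumptions:

```python
# Sketch: greedy assignment of DNN subgraphs to heterogeneous devices.
# Each subgraph has an estimated cost on a reference device; each device
# has a relative speed. We always give the next-largest subgraph to the
# currently least-loaded device (an LPT-style heuristic).
import heapq

def assign_subgraphs(subgraph_costs, devices):
    """subgraph_costs: dict subgraph -> estimated cost (reference device).
    devices: dict device -> relative speed (higher = faster).
    Returns a mapping subgraph -> device."""
    # Min-heap of (accumulated execution time, device name).
    loads = [(0.0, d) for d in devices]
    heapq.heapify(loads)
    placement = {}
    # Place the most expensive subgraphs first.
    for sg, cost in sorted(subgraph_costs.items(), key=lambda kv: -kv[1]):
        t, d = heapq.heappop(loads)
        placement[sg] = d
        heapq.heappush(loads, (t + cost / devices[d], d))
    return placement

# Illustrative model partition and device pool.
costs = {"conv_block": 8.0, "attention": 6.0, "mlp": 4.0, "norm": 1.0}
speeds = {"gpu": 4.0, "cpu": 1.0}
print(assign_subgraphs(costs, speeds))
```

A real compiler would also account for inter-device transfer costs along cut edges of the graph, which this sketch omits.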
Funding: Supported by the Key R&D Project of Zhejiang Province (2018C01005), http://kjt.zj.gov.cn/.
Abstract: Numerous clothing enterprises in the market have relatively low efficiency in assembly line planning due to insufficient optimization of bottleneck stations. As a result, production efficiency is not high, and the production organization does not meet expectations. Aiming at the problem of flexible process route planning in garment workshops, a multi-objective genetic algorithm (MOGA) is proposed to solve the assembly line balance optimization problem and minimize the machine adjustment path. The encoding method adopts an object-oriented path representation, and the initial population is generated by random topological sorting based on an in-degree selection mechanism. The algorithm improves the mutation and crossover operations according to the characteristics of the clothing process to avoid generating invalid offspring. In the iterative process, bottleneck stations are optimized by reasonable process splitting, and process allocation conforms to the strict limit each station places on the number of machines, in order to improve compilation efficiency. The effectiveness and feasibility of the multi-objective genetic algorithm are proven by the analysis of clothing cases. Compared with manual process allocation, the compilation efficiency of MOGA is increased by more than 15%, and the algorithm completes the optimization of the minimum machine adjustment path. The results are in line with the expected optimization effect.
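The population-initialization step described above, random topological sorting driven by an in-degree mechanism, can be sketched as follows. The process names and precedence constraints are illustrative assumptions, not taken from the paper's case study:

```python
# Sketch: generating one chromosome as a random topological order of
# garment operations. At each step we pick a random operation whose
# predecessors are all already scheduled (in-degree has dropped to 0),
# which keeps every chromosome precedence-feasible while diversifying
# the initial population.
import random
from collections import defaultdict

def random_topological_order(ops, precedence, rng=random):
    """ops: iterable of operation ids.
    precedence: list of (before, after) constraints between operations."""
    indeg = {op: 0 for op in ops}
    succ = defaultdict(list)
    for a, b in precedence:
        indeg[b] += 1
        succ[a].append(b)
    ready = [op for op, d in indeg.items() if d == 0]
    order = []
    while ready:
        op = rng.choice(ready)  # random choice -> population diversity
        ready.remove(op)
        order.append(op)
        for nxt in succ[op]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                ready.append(nxt)
    return order

# Illustrative garment operations and precedence constraints.
ops = ["cut", "sew_sleeve", "sew_body", "attach_collar", "press"]
prec = [("cut", "sew_sleeve"), ("cut", "sew_body"),
        ("sew_body", "attach_collar"), ("attach_collar", "press")]
population = [random_topological_order(ops, prec) for _ in range(5)]
```

Each sampled order is a valid process sequence, so crossover and mutation can start from feasible individuals rather than repairing infeasible ones.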