The developments of multi-core systems (MCS) have considerably improved the existing technologies in the field of computer architecture. An MCS comprises several processors that are heterogeneous in resource capacities, working environments, topologies, and so on. The existing multi-core technology unlocks additional research opportunities for energy minimization through effective task scheduling. At the same time, the task scheduling process is yet to be fully explored in multi-core systems. This paper presents a new hybrid genetic algorithm (GA) with krill herd (KH) based energy-efficient scheduling technique for multi-core systems (GAKH-SMCS). The goal of the GAKH-SMCS technique is to derive task schedules that achieve faster completion time and minimum energy dissipation. The GAKH-SMCS model involves a multi-objective fitness function using four parameters, namely makespan, processor utilization, speedup, and energy consumption, to schedule tasks proficiently. The performance of the GAKH-SMCS model has been validated against two datasets, namely a random dataset and a benchmark dataset. The experimental outcomes confirm the effectiveness of the GAKH-SMCS model in terms of makespan, processor utilization, speedup, and energy consumption. The overall simulation results show that the presented GAKH-SMCS model achieves energy efficiency through an optimal task scheduling process in MCS.
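The abstract lists the four objectives entering the fitness function but not how they are combined; the sketch below shows one common way to fold them into a single score (a weighted sum with assumed weights, normalization constants, and function name, none of which come from the paper):

```python
def schedule_fitness(makespan, utilization, speedup, energy,
                     w=(0.25, 0.25, 0.25, 0.25),
                     max_makespan=1.0, max_speedup=1.0, max_energy=1.0):
    """Hypothetical multi-objective fitness: lower makespan/energy and
    higher utilization/speedup yield a larger fitness value."""
    f_makespan = 1.0 - makespan / max_makespan      # minimize makespan
    f_util = utilization                            # already in [0, 1]
    f_speedup = speedup / max_speedup               # maximize speedup
    f_energy = 1.0 - energy / max_energy            # minimize energy
    return w[0]*f_makespan + w[1]*f_util + w[2]*f_speedup + w[3]*f_energy
```

In a GA/KH hybrid, a function of this shape would score each candidate schedule during selection; the actual weighting and normalization used by GAKH-SMCS may differ.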
Real-time system timing analysis is crucial for estimating the worst-case execution time (WCET) of a program. To achieve this, static or dynamic analysis methods are used, along with targeted modeling of the actual hardware system. This literature review focuses on calculating WCET for multi-core processors, providing a survey of traditional methods used for static and dynamic analysis and highlighting the major challenges that arise from different program execution scenarios on multi-core platforms. This paper outlines the strengths and weaknesses of current methodologies and offers insights into prospective areas of research on multi-core analysis. By presenting a comprehensive analysis of the current state of research on multi-core processor analysis for WCET estimation, this review aims to serve as a valuable resource for researchers and practitioners in the field.
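As background for the static-analysis side of the survey, many WCET tools bound the longest path with the implicit path enumeration technique (IPET); the generic formulation below is textbook material rather than a contribution of the review:

```latex
\mathrm{WCET} \;\le\; \max_{x,\,f}\ \sum_{i \in B} c_i\, x_i
\quad \text{s.t.} \quad
x_i \;=\; \sum_{e \in \mathrm{in}(i)} f_e \;=\; \sum_{e \in \mathrm{out}(i)} f_e,
\qquad f_e \le L_e,
```

where $B$ is the set of basic blocks, $c_i$ an upper bound on the execution time of block $i$, $x_i$ its execution count, $f_e$ the control-flow edge frequencies, and $L_e$ the loop/flow bounds. The multi-core challenges the review discusses largely concern keeping the per-block bounds $c_i$ sound in the presence of shared caches, buses, and memory controllers.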
Modern shared-memory multi-core processors typically have shared Level 2 (L2) or Level 3 (L3) caches. Cache bottlenecks and replacement strategies are the main problems of such architectures, where multiple cores try to access the shared cache simultaneously. The main problem in improving memory performance is the shared cache architecture and cache replacement. This paper documents the implementation of a Dual-Port Content Addressable Memory (DPCAM) and a modified Near-Far Access Replacement Algorithm (NFRA), which was previously proposed as a shared L2 cache layer in a multi-core processor. Standard Performance Evaluation Corporation (SPEC) Central Processing Unit (CPU) 2006 benchmark workloads are used to evaluate the benefit of the shared L2 cache layer. Results show improved performance of the multi-core processor's DPCAM and NFRA algorithms, corresponding to a higher number of concurrent accesses to shared memory. The new architecture significantly increases system throughput and records performance improvements of up to 8.7% on various types of SPEC 2006 benchmarks. The miss rate is also improved by about 13%, with some exceptions in the sphinx3 and bzip2 benchmarks. These results could open a new window for solving the long-standing problems with shared caches in multi-core processors.
Packet classification has been studied for decades; it classifies packets into specific flows based on a given rule set. As software-defined networking was proposed, a recent trend in packet classification is to scale the five-tuple model to multi-tuple. In general, packet classification on multiple fields is a complex problem. Although most existing software-based algorithms have proved extraordinary in practice, they are only suitable for the classic five-tuple model and difficult to scale up. Meanwhile, hardware-specific solutions are inflexible and expensive, and some of them are power consuming. In this paper, we propose a universal multi-dimensional packet classification approach for multi-core systems. In our approach, novel data structures and four decomposition-based algorithms are designed to optimize the classification and updating of rules. For multi-field rules, a rule set is cut into several parts according to the number of fields. Each part works independently. In this way, the fields are searched in parallel and all the partial results are merged together at the end. To demonstrate the feasibility of our approach, we implement a prototype and evaluate its throughput and latency. Experimental results show that our approach achieves a 40% higher throughput than that of other decomposition-based algorithms and a 43% lower latency of rule incremental update than that of the other algorithms on average. Furthermore, our approach saves 39% memory consumption on average and has good scalability.
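To make the decomposition idea concrete, the sketch below searches each field independently and merges the partial rule-ID sets by intersection; the data layout and helper names are illustrative assumptions, not the paper's data structures:

```python
from functools import reduce

def lookup(table, value):
    """Placeholder per-field search: table maps (lo, hi) ranges to sets of
    rule IDs. A real implementation would use a range tree, trie, or bitmap
    instead of this linear scan."""
    return {rid
            for (lo, hi), rids in table.items()
            if lo <= value <= hi
            for rid in rids}

def classify(packet, field_tables, rule_priority):
    """Decomposition-based lookup: each field is searched independently
    (and could run on its own core); the partial results are intersected
    and the highest-priority surviving rule wins."""
    partial = [lookup(table, packet[field])
               for field, table in field_tables.items()]
    matched = reduce(set.intersection, partial) if partial else set()
    return min(matched, key=lambda rid: rule_priority[rid], default=None)
```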
A novel kind of multi-core magnetic composite particles, the surfaces of which were respectively modified with goat-anti-mouse IgG and anti-transferrin receptor (anti-CD71), was prepared. The fetal nucleated red blood cells (FNRBCs) in the peripheral blood of a gravida were rapidly and effectively enriched and separated by the modified multi-core magnetic composite particles in an external magnetic field. The obtained FNRBCs were used for the identification of fetal sex by means of the fluorescence in situ hybridization (FISH) technique. The results demonstrate that the multi-core magnetic composite particles meet the requirements for the enrichment and separation of FNRBCs at a low concentration, and the accuracy of detection for the diagnosis of fetal sex reached 95%. Moreover, the obtained FNRBCs were applied to the non-invasive diagnosis of Down syndrome, and chromosome 3p21 was detected. The above facts indicate that the novel multi-core magnetic composite particles-based method is simple, reliable and cost-effective and has opened up vast vistas for potential application in clinical non-invasive prenatal diagnosis.
A variation-aware task mapping approach is proposed for multi-core network-on-chips with redundant cores, which includes both design-time mapping and run-time scheduling algorithms. Firstly, a design-time genetic task mapping algorithm is proposed during the design stage to generate multiple task mapping solutions which cover a maximum range of chips. Then, at run time, one optimal task mapping solution is selected. Additionally, logical cores are mapped to physically available cores. Both core asymmetry and topological changes are considered in the proposed approach. Experimental results show that the performance yield of the proposed approach is 96% on average, and the communication cost, power consumption and peak temperature are all optimized without loss of performance yield.
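A design-time genetic mapping of this kind can be pictured as evolving chromosomes that assign each task to a core; the bare-bones sketch below uses assumed operators, parameters, and a caller-supplied fitness function, and is not the paper's exact algorithm:

```python
import random

def random_mapping(n_tasks, n_cores):
    """Chromosome: mapping[i] is the core assigned to task i."""
    return [random.randrange(n_cores) for _ in range(n_tasks)]

def evolve(fitness, n_tasks, n_cores, pop_size=50, generations=100):
    """Simple generational GA: keep the better half, refill with one-point
    crossover plus occasional mutation, and return the best mapping found."""
    pop = [random_mapping(n_tasks, n_cores) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # higher fitness is better
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_tasks)       # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:                # mutation
                child[random.randrange(n_tasks)] = random.randrange(n_cores)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

In the variation-aware setting, the fitness function would reward solutions that remain valid across many fabricated chips, which is how a set of mappings covering a maximum range of chips could be selected.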
Developing parallel applications on heterogeneous processors faces the challenge of the 'memory wall', due to the limited capacity of local storage, limited bandwidth and long latency of memory access. Aiming at this problem, a parallelization approach was proposed with six memory optimization schemes for CG, four of which target various kinds of sparse matrix-vector multiplication (SPMV) operations. Conducted on an IBM QS20, the parallelization approach reaches up to 21 and 133 times speedup with sizes A and B, respectively, compared with a single Power Processor Element. Finally, the conclusion is drawn that the peak memory-access bandwidth of the Cell BE can be obtained in SPMV, that simple computation is more efficient on heterogeneous processors, and that loop unrolling can hide local storage access latency while executing scalar operations on SIMD cores.
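For readers unfamiliar with the kernel involved, a CSR-format SPMV, the operation the memory-optimization schemes target, is essentially the following (a generic reference sketch, not the paper's Cell BE implementation):

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """CSR sparse matrix-vector product y = A @ x. The memory traffic is
    dominated by the irregular gather x[col_idx[k]], which is exactly the
    access pattern that local-store and bandwidth optimizations must tame."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y
```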
Contemporary operating systems for single-ISA (instruction set architecture) multi-core systems attempt to distribute tasks equally among all the CPUs. This approach works relatively well when there is no difference in CPU capability. However, there are cases in which CPU capabilities differ from one another. For instance, static capability asymmetry results from the advent of new asymmetric hardware, and dynamic capability asymmetry comes from operating system (OS) noise caused by networking or I/O handling. These asymmetries can make it hard for the OS scheduler to evenly distribute the tasks, resulting in less efficient load balancing. In this paper, we propose a user-level load balancer for parallel applications, called the 'capability balancer', which recognizes the differences in CPU capability and makes subtasks share the entire CPU capability fairly. The balancer can coexist with the existing kernel-level load balancer without degrading the behavior of the kernel balancer. The capability balancer can fairly distribute CPU capability to tasks with very little overhead. For real workloads like the NAS Parallel Benchmark (NPB), we have achieved speedups of up to 9.8% and 8.5% under dynamic and static asymmetries, respectively. We have also observed speedups of 13.3% for dynamic asymmetry and 24.1% for static asymmetry in a competitive environment. The impacts of our task selection policies, FIFO (first in, first out) and cache, were compared. The use of the cache policy led to a speedup of 5.3% in overall execution time and a decrease of 4.7% in the overall cache miss count, compared with the FIFO policy, which is used by default.
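The core idea, giving each CPU a share of subtasks proportional to its observed capability, can be sketched as below; the deficit-based assignment and the interfaces are assumptions for illustration, not the balancer's actual policy:

```python
def assign_subtasks(subtasks, capabilities):
    """Hypothetical capability-proportional assignment: subtasks are dealt
    out so each CPU's share of work tracks its share of total capability,
    so faster (or less noisy) cores receive more work."""
    total = float(sum(capabilities))
    bins = [[] for _ in capabilities]
    assigned = [0.0] * len(capabilities)
    for task in subtasks:
        # Give the next subtask to the CPU that is furthest below its fair share.
        i = min(range(len(capabilities)),
                key=lambda j: assigned[j] / (capabilities[j] / total))
        bins[i].append(task)
        assigned[i] += 1.0
    return bins
```

A user-level balancer would additionally remeasure capabilities periodically, which is what lets it react to dynamic asymmetry caused by OS noise.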
Decreasing the mode coupling coefficient (κ) is an effective approach to suppressing inter-core crosstalk. Therefore, we deploy a low-index rod and a rectangular trench in the middle of two neighboring cores to reduce κ so that the overlap of the electric field distributions can be suppressed. We also propose an approximate analytical solution (AAS) for κ of two crosstalk suppression models, namely two cores with one low-index rod deployed in the middle and two cores with one low-index rectangular trench deployed in the middle. We then modify the results obtained by the AAS, and the modified results are shown to agree well with those obtained by the finite element method (FEM). Therefore, we can use the modified AAS to quickly obtain the inter-core crosstalk for the two models mentioned above.
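For reference, the role of κ follows from standard two-core coupled-mode theory (a textbook relation, not a result of this paper):

```latex
\frac{dA_1}{dz} = -j\,\kappa\, A_2\, e^{-j\,\Delta\beta\, z},
\qquad
\frac{dA_2}{dz} = -j\,\kappa\, A_1\, e^{+j\,\Delta\beta\, z},
```

so for identical, phase-matched cores ($\Delta\beta = 0$) the power coupled into the neighboring core after a length $z$ is $P_2(z) = P_1(0)\sin^2(\kappa z)$, which is why lowering κ with a rod or trench between the cores directly suppresses the crosstalk.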
In this paper, the influencing factors that affect the few-mode multi-core optical fiber channel are analyzed in a comprehensive way. Theoretical modeling and computer simulation of the information channel are carried out, and a modeling scheme for the few-mode multi-core optical fiber channel based on a non-uniform mode field distribution is then put forward. The proposed modeling scheme not only exponentially increases the system capacity through the few-mode multi-core optical fiber channel, but also, as the simulation results reveal, achieves better transmission performance than a uniform channel of the same type.
This paper focuses on how to optimize the cache performance of sparse matrix-matrix multiplication (SpGEMM). It classifies the cache misses into two categories: one is caused by the irregular distribution pattern of the multiplier matrix, and the other is caused by the multiplicand. For each of them, the paper puts forward a respective optimization method. The first, hash-based method removes cache misses of the first category effectively, and improves the performance by a factor of 6 on an Intel 8-core CPU in the best cases. For cache misses of the second category, it proposes a new cache replacement algorithm, which achieves a cache hit rate much higher than that of other historical-knowledge-based algorithms, and the algorithm is applicable on CELL and GPU. To further verify the effectiveness of our methods, we implement our algorithm on GPU, and the performance scales perfectly with the size of on-chip storage.
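The hash-based idea can be pictured with a Gustavson-style row-by-row SpGEMM in which each output row is accumulated in a hash table rather than a dense scratch array (a generic sketch; the paper's actual data structure and hashing scheme are not shown here):

```python
def spgemm_row_hash(A_rows, B_rows):
    """A_rows/B_rows: lists mapping row index -> list of (col, value) pairs.
    Each output row of C = A @ B is built in a small hash map, keeping the
    working set for the irregular accesses compact and cache-friendly."""
    C_rows = []
    for a_row in A_rows:
        acc = {}                       # hash accumulator for one output row
        for k, a_val in a_row:
            for j, b_val in B_rows[k]:
                acc[j] = acc.get(j, 0.0) + a_val * b_val
        C_rows.append(sorted(acc.items()))
    return C_rows
```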
The state-of-the-art multi-core computer systems are based on Very Large Scale three-Dimensional (3D) Integrated circuits (VLSI). In order to provide high-speed vertical data transmission in such 3D systems, efficient Through-Silicon Via (TSV) technology is critically important. In this paper, various Radio Frequency (RF) TSV designs and models are proposed. Specifically, the Cu-plug TSV with surrounding ground TSVs is used as the baseline structure. For further improvement, dielectric coaxial and novel air-gap coaxial TSVs are introduced. Using the empirical parameters of these coaxial TSVs, simulation results are obtained demonstrating that these coaxial RF-TSVs can provide cut-off frequencies two orders of magnitude higher than those of the Cu-plug TSVs. Based on these new RF-TSV technologies, we propose a novel 3D multi-core computer system as well as new architectures for the interfaces between the RF and baseband circuits. Taking into consideration the scaling down of IC manufacturing technologies, predictions for the performance of future generations of circuits are made. With simulation results indicating that energy per bit and area per bit are reduced by 7% and 11% respectively, we conclude that the proposed method is a worthwhile guideline for the design of future multi-core computer ICs.
In order to improve the concurrent access performance of a web-based spatial computing system in a cluster, a parallel scheduling strategy based on the multi-core environment is proposed, which includes two levels of parallel processing mechanisms: one evenly allocates tasks to each server node in the cluster, and the other implements load balancing inside a server node. Based on this strategy, a new web-based spatial computing model is designed in this paper, in which a task response ratio calculation method, a request queue buffer mechanism and a thread scheduling strategy are the focus. Experimental results show that the new model can fully exploit the multi-core computing advantage of each server node in the concurrent access environment and improve the average hits per second, average I/O hits, CPU utilization and throughput. Using the speed-up ratio to compare the traditional model and the new one, the results show that the new model has the best performance. The performance of the multi-core server nodes in the cluster is optimized, and the resource utilization and the parallel processing capabilities are enhanced. The more CPU cores there are, the higher the parallel processing capability obtained.
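The abstract mentions a task response ratio calculation method without giving its form; purely as an illustration, an HRRN-style ratio (which may differ from the paper's definition) would look like this:

```python
def response_ratio(waiting_time, estimated_service_time):
    """HRRN-style response ratio: grows with waiting time, so long-waiting
    requests are eventually preferred over short, newly arrived ones."""
    return (waiting_time + estimated_service_time) / estimated_service_time

def pick_next(request_queue, now):
    # request_queue: list of (arrival_time, estimated_service_time, payload)
    return max(request_queue,
               key=lambda r: response_ratio(now - r[0], r[1]))
```

Combined with a buffered request queue, such a ratio lets the thread scheduler trade responsiveness against throughput under heavy concurrent access.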
The first important problem in the star-forming process is the formation of protostellar cores in star-forming regions of molecular clouds. The multi-core structure in star-forming regions is related to the formation of protostellar cores. The molecular radiation of C18O (J = 1-0) in Cepheus C has been observed. The C18O (J = 1-0) observations form the basis for an interesting study of the cloud cores and star formation activity in the cores of Cepheus C. In order to study the multi-core structure of C18O (J = 1-0) in Cepheus C, the channel maps and the position-velocity diagrams of C18O (J = 1-0) are shown. From the maps it is found that the contour level and distribution size of the three cores in Cepheus C depend strongly on the channel velocity. The channel velocity of the C18O (J = 1-0) molecules in core b, which are distributed over all the channel velocities, differs greatly from that in core a and core c. The C18O (J = 1-0) molecules in core a and core c of Cepheus C are mostly distributed in the blue-shifted channel velocities relative to the peak velocity, and only in -10.0 to -9.5 km/s, which is the red-shifted channel velocity relative to the peak velocity. The contour level of C18O (J = 1-0) in -10.0 to -9.5 km/s is small, and the distribution size in the channel map is small. According to the position-velocity diagrams, the asymmetry of the distribution of both blue-shifted and red-shifted components should reflect the asymmetry of the profile. From the diagrams it is also found that the contour level and the distribution size of the three cores differ from one another. The results from the maps and the diagrams are consistent with each other.
The Long Term Evolution (LTE) system imposes high requirements on dispatching delay. Moreover, the very high air-interface rate of LTE requires strong processing capability from the devices that process the baseband signals. Consequently, a single-core processor cannot meet the requirements of the LTE system. This paper analyzes how to use multi-core processors to achieve parallel processing of uplink demodulation and decoding in LTE systems and designs an approach to such parallel processing. The test results prove that this approach works quite well.
In this paper, a study of the expected performance behaviour of the present 3-level cache system for multi-core systems is presented. For this, a queuing model of the present 3-level cache system for multi-core processors is developed, and its possible performance is analyzed as the number of cores increases. Various important performance parameters, such as the access time and utilization of the individual caches at each level and the overall average access time of the cache system, are determined. Results for up to 1024 cores are reported in this paper.
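As background for the parameters mentioned (not the paper's queuing model itself), the overall average access time of a three-level cache hierarchy is commonly written as:

```latex
T_{\mathrm{avg}} \;=\; h_1 t_1 \;+\; (1-h_1)\Bigl[\,h_2 t_2 \;+\; (1-h_2)\bigl(h_3 t_3 + (1-h_3)\, t_{\mathrm{mem}}\bigr)\Bigr],
```

where $h_i$ and $t_i$ are the hit rate and access time of the level-$i$ cache and $t_{\mathrm{mem}}$ is the main-memory access time; in a queuing treatment, contention at the shared levels effectively inflates the corresponding $t_i$ as the core count grows.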
Frequent data exchange among all kinds of memories has become an inevitable phenomenon in modern embedded software design. In order to improve an embedded system's data throughput and computation capability, most embedded devices introduce Enhanced Direct Memory Access (EDMA) data transfer technology. The TMS320C6678 is a multi-core DSP produced by Texas Instruments (TI). There are ten EDMA transmission controllers in the chip available for configuration, and data transfers are allowed to be performed between any two pieces of storage at the same time. This paper expounds the working mechanism of EDMA based on the multi-core DSP TMS320C6678. At the same time, multiple data sets are provided, and the bottleneck limiting data throughput is analyzed and resolved.