1-Amino-1-ethylamino-2,2-dinitroethylene (AEFOX-7) was synthesized by the reaction of 1,1-diamino-2,2-dinitroethylene (FOX-7) with aqueous ethylamine solution at 92 ℃. A theoretical investigation of AEFOX-7 was carried out with the B3LYP/6-311++G** method. The IR frequencies and NMR chemical shifts were calculated and compared with the experimental results. The thermal behavior of AEFOX-7 was studied by differential scanning calorimetry (DSC) and thermogravimetry-derivative thermogravimetry (TG-DTG), and it can be divided into a melting process and an exothermic decomposition process. The enthalpy, apparent activation energy, and pre-exponential factor of the exothermic decomposition reaction were 374.88 kJ/mol, 169.7 kJ/mol, and 10^19.24 s^-1, respectively. The critical temperature of thermal explosion of AEFOX-7 is 145.2 ℃. The specific heat capacity of AEFOX-7 was determined by the micro-DSC method and by theoretical calculation, and the molar heat capacity is 214.50 J/(mol·K) at 298.15 K. The adiabatic time-to-explosion of AEFOX-7 was calculated to lie between 1.38 and 1.40 s. The thermal stability of AEFOX-7 is much lower than that of FOX-7.
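The abstract quotes an apparent activation energy, a pre-exponential factor, and a critical temperature of thermal explosion but does not state the working relations. For context, the relations most commonly used in this literature are the Kissinger equation and the critical-temperature formula below; attributing them to this particular study is an assumption.

\[
\ln\frac{\beta_i}{T_{pi}^{2}} = \ln\frac{AR}{E_a} - \frac{E_a}{R\,T_{pi}},
\qquad
T_b = \frac{E_O - \sqrt{E_O^{2} - 4E_O R\,T_{e0}}}{2R},
\]

where \(\beta_i\) is the heating rate, \(T_{pi}\) the exothermic peak temperature, \(T_{e0}\) the onset temperature extrapolated to zero heating rate, \(E_O\) the activation energy obtained by the Ozawa method, and \(R\) the gas constant.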
Traditional two-dimensional (2D) complex resistivity forward modeling is based on Poisson's equation, but spectral induced polarization (SIP) data are the coproducts of the induced polarization (IP) and electromagnetic induction (EMI) effects. This is especially true at high frequencies, where the EMI effect can exceed the IP effect. 2D inversion that only considers the IP effect therefore reduces the reliability of the inversion results. In this paper, we derive differential equations from Maxwell's equations. With the introduction of the Cole-Cole model, we use the finite-element method to conduct 2D SIP forward modeling that considers the EMI and IP effects simultaneously. The data-space Occam method, in which different constraints on the model smoothness and parametric boundaries are introduced, is then used to simultaneously obtain the four parameters of the Cole-Cole model from multi-array electric field data. This approach not only improves the stability of the inversion but also significantly reduces the solution ambiguity. To improve the computational efficiency, message passing interface (MPI) programming was used to accelerate the 2D SIP forward modeling and inversion. Synthetic datasets were tested using both serial and parallel algorithms, and the tests suggest that the proposed parallel algorithm is robust and efficient.
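For reference, the Cole-Cole complex resistivity model introduced above is commonly written as

\[
\rho(\omega) = \rho_0\left[1 - m\left(1 - \frac{1}{1 + (\mathrm{i}\omega\tau)^{c}}\right)\right],
\]

where \(\rho_0\) is the zero-frequency resistivity, \(m\) the chargeability, \(\tau\) the time constant, and \(c\) the frequency dependence; these are the four parameters recovered simultaneously by the data-space Occam inversion.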
Parallel machine scheduling problems, which are important discrete optimization problems, occur in many applications. For example, load balancing in network communication channel assignment, parallel processing in large-scale computing, and task arrangement in flexible manufacturing systems are multiprocessor scheduling problems. In traditional parallel machine scheduling problems, it is assumed that the problems are considered in an offline or online environment. But in practice, problems are often not really offline or online but somewhere in-between. This means that, with respect to the online problem, some further information about the tasks is available, which allows the performance of the best possible algorithms to be improved. Problems of this class are called semi-online. In this paper, the semi-online problem P2|decr|lp (p>1) is considered, where jobs arrive in non-increasing order of their processing times and the objective is to minimize the sum of the lp norm of every machine's load. It is shown that the LS algorithm is optimal for any lp norm, which extends the results known in the literature. Furthermore, randomized lower bounds for the problems P2|online|lp and P2|decr|lp are presented.
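A minimal sketch of the LS (list scheduling) rule analyzed above, for two machines and jobs arriving in non-increasing order of processing time; the function name and the toy instance are illustrative, not from the paper.

```python
def ls_two_machines(jobs, p=2):
    """LS rule: assign each arriving job to the machine with the smaller
    current load. For P2|decr|lp the jobs are assumed to arrive in
    non-increasing order of processing time."""
    loads = [0.0, 0.0]
    for t in jobs:
        i = 0 if loads[0] <= loads[1] else 1   # pick the currently lighter machine
        loads[i] += t
    # lp-norm objective over the two machine loads
    return (loads[0] ** p + loads[1] ** p) ** (1.0 / p)

# toy usage: jobs sorted in non-increasing order, l2-norm objective
print(ls_two_machines(sorted([3.0, 5.0, 2.0, 4.0], reverse=True), p=2))
```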
AIM: To evaluate the relationship between donor safety and remnant liver volume in right lobe living donor liver transplantation (LDLT). METHODS: From July 2001 to January 2009, our liver transplant centers carried out 197 LDLTs. The clinical data from 151 adult right lobe living donors (not including the middle hepatic vein) were analyzed. The donors' preoperative, intraoperative, and postoperative data were calculated for three groups: group 1, remnant liver volume (RLV) < 35%; group 2, RLV 36%-40%; and group 3, RLV > 40%. The three groups of donors were well matched in terms of the studied parameters. Comparisons included the effect of different remnant liver volumes on postoperative liver function recovery and on the donors' systemic condition. Correlations between remnant liver volume and postoperative complications were also analyzed. RESULTS: The donors' anthropometric data, operation time, and preoperative blood test indicators were calculated for the three groups. No significant differences were observed in the donors' gender, age, height, weight, or operation time. According to the Chengdu standard liver volume formula, the total liver volume was 1072.88 ± 131.06 mL in group 1, 1043.84 ± 97.11 mL in group 2, and 1065.33 ± 136.02 mL in group 3; the three groups showed no statistically significant differences. When the volume of the remnant liver was less than 35% of the total liver volume, the remnant volume had a significant effect on the recovery of liver function and on intensive care unit time. In addition, the occurrence of complications was closely related to the remnant liver volume. When the volume of the remnant liver was more than 35% of the total liver volume, the remnant volume had no significant effect on donor recovery. CONCLUSION: To ensure donor safety, the remnant liver volume should be greater than 35% of the standard liver volume in right lobe living donor liver transplantation.
To improve the inversion accuracy of time-domain airborne electromagnetic data, we propose a parallel 3D inversion algorithm for airborne EM data based on the direct Gauss-Newton optimization. Forward modeling is performed in the frequency domain based on the scattered secondary electrical field. Then, the inverse Fourier transform and convolution of the transmitting waveform are used to calculate the EM responses and the sensitivity matrix in the time domain for arbitrary transmitting waves. To optimize the computational time and memory requirements, we use the EM "footprint" concept to reduce the model size and obtain the sparse sensitivity matrix. To improve the 3D inversion, we use the OpenMP library and parallel computing. We test the proposed 3D parallel inversion code using two synthetic datasets and a field dataset. The time-domain airborne EM inversion results suggest that the proposed algorithm is effective, efficient, and practical.
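The abstract does not spell out the Gauss-Newton update; a standard regularized Gauss-Newton model update of the kind used in such EM inversions is

\[
\left(\mathbf{J}^{\mathsf T}\mathbf{J} + \lambda\,\mathbf{C}^{\mathsf T}\mathbf{C}\right)\delta\mathbf{m}
= \mathbf{J}^{\mathsf T}\bigl(\mathbf{d}_{\mathrm{obs}} - \mathbf{F}(\mathbf{m}_k)\bigr) - \lambda\,\mathbf{C}^{\mathsf T}\mathbf{C}\,\mathbf{m}_k,
\qquad
\mathbf{m}_{k+1} = \mathbf{m}_k + \alpha\,\delta\mathbf{m},
\]

where \(\mathbf{J}\) is the (footprint-sparsified) sensitivity matrix, \(\mathbf{F}\) the forward operator, \(\mathbf{C}\) a roughness operator, \(\lambda\) the regularization parameter, and \(\alpha\) a step length; the specific regularization and step control used in the paper are assumptions here.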
Parallel machine problems with a single server and release times are generalizations of classical parallel machine problems. Each job has a given release time, and before processing it must be loaded on a machine, which takes a certain setup time. All these setups have to be done by a single server, which can handle at most one job at a time. In this paper, we continue the study of complexity results for the parallel machine problem with a single server and release times. New complexity results are derived for special cases.
Traditional human detection using pre-trained detectors tends to be computationally intensive for time-critical tracking tasks, and the detection rate is prone to be unsatisfactory when occlusion, motion blur, and body deformation occur frequently. A spatial-confidential proposal filtering (SCPF) method is proposed for efficient and accurate human detection. It consists of two filtering phases: spatial proposal filtering and confidential proposal filtering. In the first phase, a compact spatial proposal is generated to minimize the search space and reduce the computation cost; the human detector only estimates the confidence scores of the candidate search regions accepted by the spatial proposal instead of scanning globally. In the second phase, each candidate search region is assigned a supplementary confidence score according to its reliability estimated by the confidential proposal, to reduce missed detections. The performance of the SCPF method is verified by extensive tests on several video sequences from publicly available datasets. Both quantitative and qualitative experimental results indicate that the proposed method can greatly improve the efficiency and accuracy of human detection.
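A minimal sketch of the two-phase filtering idea described above, assuming a box-based region representation; the thresholds, the fusion rule, and the callable names (spatial_prior, detector_score) are illustrative assumptions, not the paper's implementation.

```python
def scpf_detect(candidate_regions, spatial_prior, detector_score,
                spatial_thresh=0.3, accept_thresh=0.5):
    """Two-phase proposal filtering sketch.

    candidate_regions: list of (x, y, w, h) boxes from a proposal generator
    spatial_prior:     maps a box -> probability that a person occupies it
                       (e.g. derived from the tracker's predicted location)
    detector_score:    maps a box -> detector confidence in [0, 1]
    """
    detections = []
    for box in candidate_regions:
        prior = spatial_prior(box)
        if prior < spatial_thresh:          # phase 1: spatial proposal filtering
            continue                        # skip regions far from the prediction
        score = detector_score(box)         # phase 2: run the detector locally
        fused = 0.5 * score + 0.5 * prior   # supplementary confidence (illustrative fusion)
        if fused >= accept_thresh:
            detections.append((box, fused))
    return detections
```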
According to the basic requirements of underground mine personnel positioning systems and the working characteristics of active RFID tags, we studied the causes of concurrent RFID tag collisions and the missed-reading (leak reading) probability by means of theoretical analysis and computation. The results show that the probability of wireless collision increases linearly with the number of tags. The probability of collision and missed reading can be reduced by extending the working period of the duty cycle and by using a backoff algorithm. In a practical application, a working schedule for the available tags was designed according to the requirements of the project.
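A small Monte Carlo sketch of the qualitative trend above (collision probability rising with the number of tags, and falling as the usable transmission window grows); the slot model and the numbers are illustrative assumptions, not the paper's timing parameters.

```python
import random

def collision_rate(n_tags, n_slots, trials=20000):
    """Estimate the probability that at least two tags transmit in the same
    slot when n_tags each pick one of n_slots at random (a simplified
    stand-in for active tags sharing one reader duty cycle)."""
    collided = 0
    for _ in range(trials):
        slots = [random.randrange(n_slots) for _ in range(n_tags)]
        if len(set(slots)) < n_tags:      # at least one slot was chosen twice
            collided += 1
    return collided / trials

# more slots per cycle (a longer working period) lowers the collision rate
for n in (5, 10, 20, 40):
    print(n, collision_rate(n, n_slots=128))
```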
The coupled models of LBM (Lattice Boltzmann Method) and RANS (Reynolds-Averaged Navier-Stokes) are more practical for the transient simulation of mixing processes at large spatial and temporal scales, such as crude oil mixing in large-diameter storage tanks. To preserve the efficiency of parallel LBM computation, the RANS model should also be solved explicitly, whereas to maintain numerical stability an implicit method is preferable for the RANS model. This article explores, on one hand, the numerical stability of explicit methods in 2D cases and, on the other hand, how to accelerate the computation of the coupled model of LBM and an implicitly solved RANS model in 3D cases. To ensure numerical stability and avoid the use of empirical artificial limitations on turbulent quantities in 2D cases, we investigated the impacts of the collision models in LBM (LBGK, MRT) and of the numerical schemes for the convection terms (WENO, TVD) and production terms (FDM, NEQM) in an explicitly solved standard k-ε model. The combinations of MRT with TVD or MRT with NEQM were screened out as suitable for the 2D simulation of backward-facing step flow even at Re = 10^7. These scheme combinations, however, may still not guarantee numerical stability in 3D cases, and hence much finer grids are required, which is not suitable for the simulation of industrial-scale processes. We then propose a new method to accelerate the coupled model of LBM with an implicitly solved RANS model. When implemented on multiple GPUs, this new method achieves 13.5-fold acceleration relative to the original coupled model and 40-fold acceleration compared with traditional CFD simulation based on the Finite Volume (FV) method accelerated by multiple CPUs. This study provides the basis for the transient flow simulation of larger spatial and temporal scales in industrial applications with LBM-RANS methods.
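For context, the LBGK (single-relaxation-time) collision-streaming update referred to above is usually written as

\[
f_i(\mathbf{x}+\mathbf{c}_i\Delta t,\,t+\Delta t) - f_i(\mathbf{x},t)
= -\frac{1}{\tau}\Bigl[f_i(\mathbf{x},t) - f_i^{\mathrm{eq}}(\mathbf{x},t)\Bigr],
\qquad
f_i^{\mathrm{eq}} = w_i\,\rho\left[1 + \frac{\mathbf{c}_i\cdot\mathbf{u}}{c_s^{2}}
+ \frac{(\mathbf{c}_i\cdot\mathbf{u})^{2}}{2c_s^{4}} - \frac{\mathbf{u}^{2}}{2c_s^{2}}\right],
\]

where \(f_i\) are the distribution functions along the lattice velocities \(\mathbf{c}_i\), \(\tau\) the relaxation time, \(w_i\) the lattice weights, and \(c_s\) the lattice sound speed; the MRT variant replaces the single rate \(1/\tau\) with a matrix of relaxation rates applied in moment space.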
This work explores three patterns of occupants' control of window blinds and their potential influence on the daylight performance of an office room in a tropical climate. In this climate, windows are frequently obstructed by curtains to avoid glare, at the expense of daylighting and the exterior view. The consequences are an obstructed outside view, poor daylight quality, and dependency on artificial lighting. This paper assesses the impact on available daylight using parametric analysis based on dynamic daylighting computer simulations with the Grasshopper and Daysim software, combining window-to-wall ratio (WWR; 40% and 80%), sky view factor (SVF; small and large), and occupant behavior (active, intermediate, and passive users). The user patterns are based on an office building survey that identifies preferences concerning daylight use and the control of shading devices. The daylight performance criteria combine useful daylight illuminance (UDI; 500-5,000 lux) and illuminance uniformity distribution. Results confirm the impact of occupant behavior on daylighting performance. The optimum combination of external shading devices, high SVF, and large window size results in useful daylighting for one-third of the time for passive users and two-thirds for active users.
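A minimal sketch of the UDI and uniformity criteria named above; the 500-5,000 lux band is taken from the abstract, while the uniformity definition (minimum over mean across the sensor grid) and the variable names are assumptions.

```python
def udi_fraction(hourly_lux, low=500.0, high=5000.0):
    """Share of the evaluated hours whose workplane illuminance falls in the
    'useful' daylight band (500-5,000 lux in the paper's criterion)."""
    useful = sum(1 for e in hourly_lux if low <= e <= high)
    return useful / len(hourly_lux)

def uniformity(point_lux):
    """One common illuminance uniformity measure: minimum over mean across
    the sensor grid at a single time step."""
    mean = sum(point_lux) / len(point_lux)
    return min(point_lux) / mean if mean > 0 else 0.0

# toy usage with made-up simulation output
print(udi_fraction([120.0, 800.0, 2300.0, 6500.0, 4100.0]))  # 0.6
print(uniformity([300.0, 450.0, 520.0, 610.0]))
```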
To reduce the resource consumption of a parallel computation system, a static task scheduling optimization method based on a hybrid genetic algorithm is proposed and validated, which can shorten the scheduling length of parallel tasks with precedence constraints. Firstly, the global optimal model and constraints are created to describe the static task scheduling problem in heterogeneous distributed computing systems (HeDCSs). Secondly, the genetic population is coded with a matrix and used to search the total available time span of the processors, and the simulated annealing algorithm is then introduced to improve the convergence speed and overcome the traditional genetic algorithm's tendency to fall into local minima. Finally, compared with existing scheduling algorithms such as dynamic level scheduling (DLS), heterogeneous earliest finish time (HEFT), and longest dynamic critical path (LDCP), extensive experiments show that the proposed approach not only decreases the task schedule length but also achieves maximal resource utilization of the parallel computation system.
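A much-simplified sketch of the hybrid idea above (genetic crossover/mutation combined with a simulated-annealing acceptance test); it ignores precedence constraints and the matrix encoding described in the paper, and all parameter values are illustrative.

```python
import math
import random

def makespan(assign, cost):
    """Schedule length when tasks run independently on their assigned
    processors; cost[t][p] is the execution time of task t on processor p."""
    loads = [0.0] * len(cost[0])
    for t, p in enumerate(assign):
        loads[p] += cost[t][p]
    return max(loads)

def hybrid_ga(cost, pop_size=30, gens=200, temp=10.0, cooling=0.97):
    n_tasks, n_procs = len(cost), len(cost[0])
    pop = [[random.randrange(n_procs) for _ in range(n_tasks)] for _ in range(pop_size)]
    best = min(pop, key=lambda a: makespan(a, cost))
    for _ in range(gens):
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = random.sample(pop, 2)                                 # selection
            cut = random.randrange(1, n_tasks)
            child = p1[:cut] + p2[cut:]                                    # one-point crossover
            child[random.randrange(n_tasks)] = random.randrange(n_procs)   # mutation
            # SA-style acceptance: worse children survive with a temperature-dependent probability
            delta = makespan(child, cost) - makespan(p1, cost)
            new_pop.append(child if delta <= 0 or random.random() < math.exp(-delta / temp) else p1)
        pop = new_pop
        temp *= cooling                                                    # cooling schedule
        best = min(pop + [best], key=lambda a: makespan(a, cost))
    return best, makespan(best, cost)

# toy usage: 5 tasks on 2 heterogeneous processors
print(hybrid_ga([[3, 5], [2, 4], [6, 3], [4, 4], [5, 2]]))
```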
In order to improve the concurrent access performance of a web-based spatial computing system in a cluster, a parallel scheduling strategy based on the multi-core environment is proposed, which includes two levels of parallel processing mechanisms: one evenly allocates tasks to each server node in the cluster, and the other implements load balancing inside a server node. Based on this strategy, a new web-based spatial computing model is designed, in which a task response ratio calculation method, a request queue buffer mechanism, and a thread scheduling strategy are the focus. Experimental results show that the new model can fully exploit the multi-core computing advantage of each server node in a concurrent access environment and improve the average hits per second, average I/O hits, CPU utilization, and throughput. A speed-up ratio analysis of the traditional model and the new one shows that the new model has the best performance. The performance of the multi-core server nodes in the cluster is optimized, and the resource utilization and parallel processing capabilities are enhanced; the more CPU cores available, the higher the parallel processing capability obtained.
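The task response ratio calculation is not given in the abstract; the sketch below assumes the classic highest-response-ratio-next definition, R = (waiting time + estimated service time) / estimated service time, purely as an illustration of how a buffered request queue could be ordered.

```python
def response_ratio(req, now):
    """Assumed metric: R = (waiting time + estimated service time) / estimated service time."""
    wait = now - req["arrival"]
    return (wait + req["est_service"]) / req["est_service"]

def pick_next(buffered_queue, now):
    """Choose the buffered request with the highest response ratio to run next."""
    return max(buffered_queue, key=lambda r: response_ratio(r, now))

# toy usage: the long-waiting light request is scheduled before the fresh heavy one
queue = [
    {"id": "tile-render", "arrival": 90.0, "est_service": 2.0},
    {"id": "bulk-query",  "arrival": 99.0, "est_service": 8.0},
]
print(pick_next(queue, now=100.0)["id"])   # -> "tile-render"
```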
The authors propose a numerical algorithm for the two-dimensional Navier-Stokes equations written in stream function-vorticity formulation. The total time derivative term is treated with a first order characteristics method. The space approximation is based on a piecewise continuous finite element method. The proposed algorithm is used to simulate the mechanical aeration process in lakes. Such a process is used to combat the degradation of the water quality due to eutrophication phenomena. For this application high computing facilities and capacities are required. In order to optimize the computing time and make possible the simulation of real applications, the authors propose a parallel implementation of the numerical algorithm. The parallelization technique is performed using the Message Passing Interface. The efficiency of the proposed numerical algorithm is illustrated by some numerical results.
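For reference, the stream function-vorticity formulation mentioned above reads

\[
\frac{\partial\omega}{\partial t} + \mathbf{u}\cdot\nabla\omega = \nu\,\Delta\omega,
\qquad
\Delta\psi = -\,\omega,
\qquad
\mathbf{u} = \Bigl(\frac{\partial\psi}{\partial y},\,-\frac{\partial\psi}{\partial x}\Bigr),
\]

and the first-order characteristics treatment of the total time derivative approximates
\(\dfrac{D\omega}{Dt}(\mathbf{x},t^{n+1}) \approx \dfrac{\omega^{n+1}(\mathbf{x}) - \omega^{n}\bigl(\mathbf{X}^{n}(\mathbf{x})\bigr)}{\Delta t}\),
where \(\mathbf{X}^{n}(\mathbf{x})\) is the foot of the characteristic through \(\mathbf{x}\) traced back over one time step (source terms and boundary data as used in the paper are omitted here).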
High performance computing (HPC) is a powerful tool to accelerate Kohn-Sham density functional theory (KS-DFT) calculations on modern heterogeneous supercomputers. Here, we describe a massively parallel implementation of the discontinuous Galerkin density functional theory (DGDFT) method on the Sunway TaihuLight supercomputer. The DGDFT method uses adaptive local basis (ALB) functions generated on the fly during the self-consistent field (SCF) iteration to solve the KS equations with high precision, comparable to a plane-wave basis set. In particular, the DGDFT method adopts a two-level parallelization strategy that deals with various types of data distribution, task scheduling, and data communication schemes, and combines it with the master-slave multi-thread heterogeneous parallelism of the SW26010 processor, resulting in large-scale HPC KS-DFT calculations on the Sunway TaihuLight supercomputer. We show that the DGDFT method can scale up to 8,519,680 processing cores (131,072 core groups) on the Sunway TaihuLight supercomputer for studying the electronic structures of two-dimensional (2D) metallic graphene systems that contain tens of thousands of carbon atoms.
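The sketch below is only a generic illustration of a two-level MPI decomposition of the kind the abstract alludes to (groups of processes per block of basis elements, plus cooperation inside each group); it is not the DGDFT implementation, and the group count is an arbitrary placeholder.

```python
from mpi4py import MPI

world = MPI.COMM_WORLD
rank, size = world.Get_rank(), world.Get_size()

# level 1: split all ranks into groups, one group per block of elements
n_groups = 4                               # placeholder; a real code sizes this from the element grid
group_id = rank % n_groups
element_comm = world.Split(color=group_id, key=rank)

# level 2: ranks inside a group cooperate on that block's dense linear algebra
local_rank, local_size = element_comm.Get_rank(), element_comm.Get_size()

# group roots form a separate communicator for coarse inter-group exchanges
is_root = (local_rank == 0)
root_comm = world.Split(color=0 if is_root else MPI.UNDEFINED, key=rank)

print(f"world rank {rank}/{size} -> group {group_id}, local {local_rank}/{local_size}")
```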
The line segment intersection problem is one of the basic problems in computational geometry and is widely used in spatial analysis in Geographic Information Systems (GIS). Most traditional algorithms study the problem in a serial environment. However, in GIS a spatial object is much more complicated: it is typically composed of multiple line segments, with one line segment connecting to another at its endpoints. On the other hand, along with advances in computer hardware, more and more personal computers are equipped with multiple cores or CPUs. Thus, to make full use of the increasing computing resources, parallel computing is one of the most readily available approaches, and the traditional algorithms should be improved to take advantage of it. Under these circumstances, based on a modified uniform grid algorithm adapted to spatial objects in GIS, this paper proposes a parallel strategy for a shared memory architecture. Experimental results are given in the final part of this paper to demonstrate the efficiency this strategy brings.
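A compact sketch of the uniform-grid idea above: segments are binned into grid cells and exact intersection tests are run only on pairs sharing a cell; since each cell's tests are independent, they can be distributed across threads in a shared-memory setting. The binning rule (bounding-box overlap) and the toy data are assumptions for illustration.

```python
from itertools import combinations

def orient(p, q, r):
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def segments_intersect(a, b, c, d):
    """Exact test: True if segment ab intersects segment cd (touching counts)."""
    o1, o2, o3, o4 = orient(a, b, c), orient(a, b, d), orient(c, d, a), orient(c, d, b)
    if o1 != o2 and o3 != o4:
        return True
    def on_seg(p, q, r):        # r collinear with pq: is it between them?
        return min(p[0], q[0]) <= r[0] <= max(p[0], q[0]) and \
               min(p[1], q[1]) <= r[1] <= max(p[1], q[1])
    return any(o == 0 and on_seg(*t) for o, t in
               [(o1, (a, b, c)), (o2, (a, b, d)), (o3, (c, d, a)), (o4, (c, d, b))])

def cells(seg, cell):
    """Grid cells overlapped by the segment's bounding box (simple binning rule)."""
    (x1, y1), (x2, y2) = seg
    for gx in range(int(min(x1, x2) // cell), int(max(x1, x2) // cell) + 1):
        for gy in range(int(min(y1, y2) // cell), int(max(y1, y2) // cell) + 1):
            yield gx, gy

def uniform_grid_intersections(segments, cell=1.0):
    buckets = {}
    for i, s in enumerate(segments):
        for c in cells(s, cell):
            buckets.setdefault(c, []).append(i)
    hits = set()
    for ids in buckets.values():            # each cell is an independent work unit
        for i, j in combinations(ids, 2):
            if segments_intersect(*segments[i], *segments[j]):
                hits.add((min(i, j), max(i, j)))
    return hits

segs = [((0, 0), (2, 2)), ((0, 2), (2, 0)), ((3, 3), (4, 3))]
print(uniform_grid_intersections(segs))     # {(0, 1)}
```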
In this paper we study the algorithms and their parallel implementation for solving large-scale generalized eigenvalue problems in modal analysis. Three predominant subspace algorithms, i.e., Krylov-Schur method, implicitly restarted Arnoldi method and Jacobi-Davidson method, are modified with some complementary techniques to make them suitable for modal analysis. Detailed descriptions of the three algorithms are given. Based on these algorithms, a parallel solution procedure is established via the PANDA framework and its associated eigensolvers. Using the solution procedure on a machine equipped with up to 4800 processors, the parallel performance of the three predominant methods is evaluated via numerical experiments with typical engineering structures, where the maximum testing scale attains twenty million degrees of freedom. The speedup curves for different cases are obtained and compared. The results show that the three methods are good for modal analysis in the scale of ten million degrees of freedom with a favorable parallel scalability.
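As a minimal, generic illustration of the kind of problem being solved (not the PANDA eigensolvers themselves), the sketch below uses SciPy's shift-invert Lanczos routine on a small stand-in stiffness/mass pair K x = λ M x; the matrices are placeholders.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# stand-in stiffness (K) and mass (M) matrices; a real modal analysis
# assembles these from the finite element model
n = 200
K = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
             offsets=[-1, 0, 1], format="csc")
M = sp.identity(n, format="csc")

# lowest six eigenpairs of K x = lambda M x via shift-invert about sigma = 0
vals, vecs = eigsh(K, k=6, M=M, sigma=0.0, which="LM")
print(np.sqrt(vals) / (2.0 * np.pi))   # natural frequencies if K, M carry SI units
```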
Systems describing the dynamics of proliferative and quiescent cells are commonly used as computational models, for instance for tumor growth and hematopoiesis. Focusing on the very earliest stages of hematopoiesis, stem cells and early progenitors, the authors introduce a new method, based on an energy/Lyapunov functional, to analyze the long-time behavior of solutions. Compared to existing works, the method in this paper has the advantage that it can be extended to more complex situations. The authors treat a system with a space variable and diffusion, and then adapt the energy functional to models with three equations.
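For orientation, a generic two-compartment proliferative/quiescent system of the kind referred to above can be written as

\[
\frac{\mathrm{d}P}{\mathrm{d}t} = \bigl(\beta - \delta_P - \sigma\bigr)P + \gamma\,Q,
\qquad
\frac{\mathrm{d}Q}{\mathrm{d}t} = \sigma\,P - \bigl(\delta_Q + \gamma\bigr)Q,
\]

with proliferation rate \(\beta\), death rates \(\delta_P,\delta_Q\), and exchange rates \(\sigma,\gamma\) between the proliferative (\(P\)) and quiescent (\(Q\)) compartments; this is only a schematic form for context, and the paper's actual system, with a space variable, diffusion, and a three-equation extension, is more involved.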