Injection of water to enhance oil production is commonplace, and improvements in understanding the process are economically important. This study examines predictive models of the injection-to-production ratio. First, the error between the fitted and actual injection-production ratio is calculated with several methods: the injection-production ratio and water-oil ratio method, the material balance method, the multiple regression method, the grey theory GM(1,1) model, and the back-propagation (BP) neural network method. The relative average errors are 1.67%, 1.08%, 19.2%, 1.38% and 0.88%, respectively. Second, the sources of error in the different prediction methods are analyzed theoretically, indicating that the BP neural network method offers high prediction precision and better self-adaptability, so that it can capture the internal relationship between the injection-production ratio and its influencing factors. The BP neural network method is therefore well suited to predicting the injection-production ratio.
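The relative average error used above to rank the five methods can be computed directly; a minimal sketch with made-up ratio values (not the paper's field data):

```python
def relative_average_error(actual, predicted):
    """Mean of |predicted - actual| / actual over all samples, as a percentage."""
    errors = [abs(p - a) / a for a, p in zip(actual, predicted)]
    return 100.0 * sum(errors) / len(errors)

# hypothetical fitted vs. actual injection-production ratios
actual = [1.05, 1.10, 1.08, 1.12]
predicted = [1.06, 1.09, 1.10, 1.11]
print(round(relative_average_error(actual, predicted), 2))  # → 1.15
```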
The design of an FPGA (field programmable gate array) based programmable SONET (synchronous optical network) OC-192 10 Gbit/s PRBS (pseudo-random binary sequence) generator and a bit interleaved parity 8 (BIP-8) error detector is presented. Implemented in a parallel feedback configuration, the tester generates PRBS sequences with bit lengths of 2^7 - 1, 2^10 - 1, 2^15 - 1, 2^23 - 1 and 2^31 - 1 for up to 10 Gbit/s applications with a 10 Gbit/s optical transceiver, via the SFI-4 (OC-192 serdes-framer interface). In the OC-192 frame alignment circuit, a dichotomy (binary) search algorithm that performs word alignment and STM-64/OC-192 de-framing speeds up the frame sync logic and greatly reduces circuit complexity. The system can be used as a low-cost tester to evaluate the performance of OC-192 devices and components, replacing expensive commercial PRBS testers.
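To illustrate the kind of sequence such a tester generates, a 2^7 - 1 PRBS can be produced in software with a linear feedback shift register using the standard PRBS7 polynomial x^7 + x^6 + 1. The paper's parallel FPGA feedback implementation differs; this serial sketch is only illustrative:

```python
def prbs7(seed=0x7F):
    """Generate one full period (127 bits) of the 2^7 - 1 PRBS
    for the polynomial x^7 + x^6 + 1 (Fibonacci LFSR form)."""
    state = seed & 0x7F
    out = []
    for _ in range(127):
        out.append(state & 1)                        # output current LSB
        new = ((state >> 6) ^ (state >> 5)) & 1      # taps at stages 7 and 6
        state = ((state << 1) | new) & 0x7F          # shift left, insert feedback
    return out

seq = prbs7()
```

A maximal-length sequence of degree 7 contains exactly 64 ones and 63 zeros per period, which makes a convenient sanity check.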
Hyperexcitability of neural networks is a key neurophysiological mechanism in several neurological disorders, including epilepsy, neuropathic pain, and tinnitus. Although the standard paradigm of pharmacological management is to suppress this hyperexcitability, as exemplified by the use of certain antiepileptic drugs, the frequent refractoriness of these disorders to drug treatment suggests a different underlying pathophysiological mechanism. Because the pathogenesis in these disorders exhibits a transition from an initial activity loss after injury or sensory deprivation to subsequent hyperexcitability and paroxysmal discharges, this process can be regarded as functional compensation similar to homeostatic plasticity regulation, in which a set level of activity in the neural network is maintained after injury-induced activity loss through enhanced network excitability. Enhancing brain activity, such as by cortical stimulation, which has been found effective in relieving symptoms of these disorders, may reduce such hyperexcitability through a homeostatic plasticity mechanism. Here we review current evidence of homeostatic plasticity in the mechanisms of acquired epilepsy, neuropathic pain, and tinnitus, and the effects and mechanism of cortical stimulation. Establishing a role of homeostatic plasticity in these disorders may provide a theoretical basis for understanding their pathogenesis, as well as guide the development and application of therapeutic approaches that electrically or pharmacologically stimulate brain activity to treat these disorders.
MapReduce is a popular programming model for processing large-scale datasets in a distributed environment and is a fundamental component of current cloud computing and big data applications. In this paper, a heartbeat mechanism for the MapReduce Task Scheduler using Dynamic Calibration (HMTS-DC) is proposed to address the unbalanced node computation capacity problem in a heterogeneous MapReduce environment. HMTS-DC uses two mechanisms to dynamically adapt and balance the tasks assigned to each compute node: 1) heartbeats to dynamically estimate the capacity of the compute nodes, and 2) data locality of replicated data blocks to reduce data transfer between nodes. With the first mechanism, based on the heartbeats received during the early stage of the job, the task scheduler can dynamically estimate the computational capacity of each node. With the second mechanism, unprocessed tasks local to each compute node are reassigned and reserved, allowing nodes with greater capacity to reserve more local tasks than their weaker counterparts. Experimental results show that HMTS-DC performs better than Hadoop and the Dynamic Data Placement Strategy (DDP) in a dynamic environment. Furthermore, an enhanced HMTS-DC (EHMTS-DC) is proposed by incorporating historical data. In contrast to the "slow start" property of HMTS-DC, EHMTS-DC relies on the historical computation capacity of the slave machines. The experimental results show that EHMTS-DC outperforms HMTS-DC in a dynamic environment.
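The second mechanism, reserving more local tasks for nodes with greater estimated capacity, can be sketched as a proportional allocation. The function and node names below are hypothetical, not HMTS-DC's actual scheduler code:

```python
def reserve_local_tasks(capacities, total_tasks):
    """Reserve local tasks for each node in proportion to its estimated
    capacity, using largest-remainder rounding so reservations sum exactly
    to total_tasks."""
    total_cap = sum(capacities.values())
    quotas = {n: total_tasks * c / total_cap for n, c in capacities.items()}
    reserved = {n: int(q) for n, q in quotas.items()}
    leftover = total_tasks - sum(reserved.values())
    # hand remaining tasks to the nodes with the largest fractional remainders
    for n in sorted(quotas, key=lambda n: quotas[n] - reserved[n], reverse=True)[:leftover]:
        reserved[n] += 1
    return reserved

print(reserve_local_tasks({"node1": 4.0, "node2": 2.0, "node3": 1.0}, 14))
# → {'node1': 8, 'node2': 4, 'node3': 2}
```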
The Analytic Hierarchy Process (AHP) method can be used to solve multi-criterion decision problems, but some complicated problems cannot easily be solved by the general method, because the consistency condition is not satisfied or the judgment matrix is too intricate to solve, which invalidates AHP. To resolve this problem, AHP knowledge systems reduced with the aid of Genetic Algorithms (GA) were proposed, which either directly acquire the ordering of the AHP issue through the rules of the Rough Sets Theory (RST) method, or solve the tasks reduced by RST with the classical AHP method. On this basis, the comparative decision system of regional informatization level was solved, and the results were the same as those obtained by classical AHP, showing that this method is simpler and reliable; four rules for converting an AHP system into an RST decision system are also given.
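The consistency condition mentioned above is conventionally checked via Saaty's consistency ratio CR = ((λ_max − n)/(n − 1))/RI, where RI is a tabulated random index. A minimal sketch, estimating λ_max by power iteration (the judgment matrix here is a made-up, perfectly consistent example):

```python
def consistency_ratio(M, iters=100):
    """Saaty consistency ratio of a pairwise judgment matrix M.
    lambda_max is estimated by power iteration (M must be positive)."""
    n = len(M)
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]  # random indices
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        v = [x / s for x in w]                    # normalize so sum(v) == 1
    lam = sum(sum(M[i][j] * v[j] for j in range(n)) for i in range(n))
    return (lam - n) / (n - 1) / RI if RI else 0.0

# a perfectly consistent 3x3 matrix (priority weights 1:2:4) has CR = 0
M = [[1, 0.5, 0.25], [2, 1, 0.5], [4, 2, 1]]
```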
Parallel machine scheduling problems, which are important discrete optimization problems, occur in many applications. For example, load balancing in network communication channel assignment, parallel processing in large-scale computing, and task arrangement in flexible manufacturing systems are all multiprocessor scheduling problems. Traditional parallel machine scheduling problems are considered in either an offline or an online environment. In practice, however, problems are often not purely offline or online but somewhere in between: with respect to the online problem, some further information about the tasks is available, which allows the performance of the best possible algorithms to be improved. Problems of this class are called semi-online. In this paper, the semi-online problem P2|decr|lp (p>1) is considered, where jobs arrive in non-increasing order of their processing times and the objective is to minimize the lp norm of the machine loads. It is shown that the LS algorithm is optimal for any lp norm, which extends results known in the literature. Furthermore, randomized lower bounds for the problems P2|online|lp and P2|decr|lp are presented.
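The LS (List Scheduling) rule this result concerns assigns each arriving job to the currently least-loaded machine. A minimal sketch for the P2|decr|lp setting, with toy processing times pre-sorted in non-increasing order:

```python
def ls_schedule(jobs, machines=2):
    """List Scheduling: assign each job, in the given order,
    to the machine with the smallest current load."""
    loads = [0.0] * machines
    for p in jobs:
        loads[loads.index(min(loads))] += p
    return loads

def lp_norm(loads, p):
    """lp norm of the machine load vector."""
    return sum(l ** p for l in loads) ** (1.0 / p)

jobs = sorted([3.0, 3.0, 2.0, 2.0], reverse=True)  # decr: non-increasing order
loads = ls_schedule(jobs)                           # → [5.0, 5.0]
```

For this instance LS balances the two machines perfectly, so the l2 norm equals sqrt(50), the minimum possible.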
In a CPM network, the longest path problem is one of the most important subjects. Based on the intrinsic principle of the CPM network, the length of the paths between two arbitrary nodes is presented. Furthermore, the length of the longest path from the start node to an arbitrary node, and from an arbitrary node to the end node, is derived. For a scheduling problem of two activities with float in CPM scheduling, we put forward the Barycenter Theory and prove it based on the algorithm for the length of the longest path. By this theory, we know which activity should be done first. Finally, we illustrate the theory with an example.
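The longest-path lengths such results build on can be computed for any activity-on-arc CPM network by dynamic programming over a topological order. A generic sketch with a hypothetical four-node network (not the paper's example):

```python
from collections import defaultdict

def longest_paths(edges, start):
    """Longest path length from `start` to every node of a DAG
    (CPM network), processing nodes in topological order."""
    adj = defaultdict(list)
    nodes = set()
    for u, v, w in edges:
        adj[u].append((v, w))
        nodes.update((u, v))
    order, seen = [], set()
    def dfs(u):                      # post-order DFS gives reverse topological order
        seen.add(u)
        for v, _ in adj[u]:
            if v not in seen:
                dfs(v)
        order.append(u)
    for n in nodes:
        if n not in seen:
            dfs(n)
    order.reverse()
    dist = {n: float("-inf") for n in nodes}
    dist[start] = 0.0
    for u in order:                  # relax edges in topological order
        if dist[u] == float("-inf"):
            continue
        for v, w in adj[u]:
            dist[v] = max(dist[v], dist[u] + w)
    return dist

edges = [(1, 2, 3.0), (1, 3, 2.0), (2, 4, 4.0), (3, 4, 6.0)]
dist = longest_paths(edges, 1)       # dist[4] → 8.0 via 1-3-4
```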
Abdominal aortic aneurysm is a common vascular disease that affects the elderly population. Open surgical repair is regarded as the gold standard technique for treatment of abdominal aortic aneurysm; however, endovascular aneurysm repair has expanded rapidly since its introduction in the 1990s. As a less invasive technique, endovascular aneurysm repair has been confirmed to be an effective alternative to open surgical repair, especially in patients with co-morbid conditions. Computed tomography (CT) angiography is currently the preferred imaging modality for both preoperative planning and post-operative follow-up. 2D CT images are complemented by a number of 3D reconstructions which enhance the diagnostic applications of CT angiography in both planning and follow-up of endovascular repair. CT has the disadvantage of a high cumulative radiation dose, of particular concern in younger patients, since patients require regular imaging follow-up after endovascular repair and are thus exposed to repeated radiation over their lifetime. There is a trend to change from CT to ultrasound surveillance of endovascular aneurysm repair. Medical image visualizations provide excellent morphological assessment of the aneurysm and stent-grafts, but fail to capture the hemodynamic changes caused by the complex stent-graft device implanted into the aorta. This article reviews the treatment options for abdominal aortic aneurysm, various image visualization tools, and follow-up procedures using different modalities, including both imaging and computational fluid dynamics methods. Future directions to improve treatment outcomes in the follow-up of endovascular aneurysm repair are outlined.
Recently, drones have found applicability in a variety of fields, one of these being forestry, where increasing interest is given to this segment of technology, especially due to the high-resolution data that can be collected flexibly, in a short time, and at a relatively low price. Drones also play an important role in filling the gaps in data collected using manned aircraft or satellite remote sensing, and they offer many advantages both in research and in various practical applications, particularly in forestry and in land use in general. This paper aims to briefly describe different applications of UAVs (Unmanned Aerial Vehicles) in forestry, such as forest mapping, forest management planning, canopy height model creation, and mapping of forest gaps. These approaches have great potential for near-future applications, and their rapid implementation in a variety of situations is desirable for the sustainable management of forests.
The simplest normal form of resonant double Hopf bifurcation was studied based on the Lie operator. The coefficients of the simplest normal forms of resonant double Hopf bifurcation, and the nonlinear transformations in terms of the original system coefficients, were given explicitly. The nonlinear transformations were used for reducing the lower- and higher-order normal forms, and the rank of the system matrix was used to determine which normal form coefficients could be reduced. This makes the resulting normal form simpler than the traditional one. A general program was written in Mathematica; it can compute the simplest normal form of resonant double Hopf bifurcation, as well as the non-resonant form, up to the 7th order.
This numerical study investigates the effects of using a diluted fuel (50% natural gas and 50% N2) in an industrial furnace under several cases of conventional combustion (air with 21% O2 at 300 and 1273 K) and highly preheated and diluted air combustion (HPDAC; air at 1273 K with 10% O2 and 90% N2), using an in-house computer program. It was found that by applying a combined diluted fuel and diluted oxidant, instead of their uncombined and/or undiluted states, the best conditions are obtained for establishing HPDAC's main unique features. These features are low mean and maximum gas temperatures and high radiative/total heat transfer to the gas and tubes, as well as more uniform distributions of both, which decreases NOx pollutant formation and increases furnace efficiency, i.e., energy saving. Moreover, the chemical flame shape, the process fluid and tube wall temperature profiles, the required regenerator efficiency, and the concentration and velocity patterns have also been studied qualitatively and quantitatively.
This paper discusses current work on the computerisation of all problems related to mining subsidence, including the time factor, carried out in the Division of Mining Geodesy of the Technical University of Silesia, Poland. First, the formulas implemented in the programs are presented. These formulas considerably increase the accuracy of the description of final deformations by taking into account the uncaved strip along the extraction rib (extraction margin). They also improve the description of deformations in areas located far from the extraction place. Then, research results aimed at improving the description of deformation over time are introduced. Finally, the Windows-based version of the program for the creation of mining geological opinions is presented, in the form accepted by the Mining Offices of Poland.
By taking the cross-wind forces acting on trains into consideration, a dynamic analysis method for the cross-wind, high-speed train, and slab track system was proposed on the basis of the theory of spatial vibration of the high-speed train and slab track system. The corresponding computer program was written in FORTRAN. The dynamic responses of the high-speed train and slab track under cross-wind action were calculated, and the effects of the cross-wind on the dynamic responses of the system were analyzed. The results show that the cross-wind has a significant influence on the lateral and vertical displacement responses of the car body, the wheel load reduction factor, and the overturning factor. For example, the maximum lateral displacement responses of the car body of the first trailer with and without cross-wind forces are 32.10 and 1.60 mm, respectively. The maximum vertical displacement responses of the car body of the first trailer with and without cross-wind forces are 6.60 and 3.29 mm, respectively. The maximum wheel load reduction factors of the first trailer with and without cross-wind forces are 0.43 and 0.22, respectively. The maximum overturning factors of the first trailer with and without cross-wind forces are 0.28 and 0.08, respectively. The cross-wind affects the derailment factor and the lateral Sperling factor of the moving train to a certain extent. However, the lateral and vertical displacement responses of the rails with the cross-wind are almost the same as those without it. The method presented and the corresponding computer program can be used to calculate the interaction between trains and track in cross-wind.
This paper presents a method for the performance calibration of AACMMs (articulated arm coordinate measuring machines) according to the ASME B89.4.22 standard. The growing use of this class of measurement equipment has been accompanied by an absence of authorized laboratories able to provide calibration certificates for its performance. Because ASME B89.4.22 and VDI/VDE 2617-9 are currently the only standards in the field of AACMM verification, IK4 Tekniker has compared both of them in order to develop internal test procedures that yield reliable performance calibration results. As a result, IK4 Tekniker has been recognized by the Spanish Accreditation Body (ENAC) in the field of AACMM calibration. Internal test procedures and uncertainty evaluation analyses have been developed, and ENAC-certified reference test equipment has been acquired to ensure a suitable AACMM calibration process.
Coverage analysis is a structural testing technique that helps to eliminate gaps in a test suite and determines when to stop testing. To compute test coverage, this letter proposes a new concept of coverage with respect to variables, based on program slicing. By assigning weights according to their importance, users can focus on the important variables to obtain higher test coverage. The letter presents methods to compute basic coverage based on program structure graphs. In most cases, the coverage obtained here is larger than that obtained by a traditional measure, because the coverage of a variable takes only the related code into account.
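The weighted notion of variable coverage can be sketched as below. The weights and variable names are hypothetical, and the letter's slicing-based determination of which variables a test actually covers is not reproduced here:

```python
def weighted_coverage(weights, covered):
    """Weighted variable coverage: sum of the weights of covered
    variables divided by the total weight of all variables."""
    total = sum(weights.values())
    hit = sum(w for v, w in weights.items() if v in covered)
    return hit / total

weights = {"x": 3.0, "y": 1.0, "z": 1.0}   # importance weights (hypothetical)
print(weighted_coverage(weights, {"x", "y"}))  # → 0.8
```

Because the important variable `x` carries most of the weight, covering it dominates the score, which is the focusing effect the letter describes.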
The rise of big data has led to new demands for machine learning (ML) systems to learn complex models, with millions to billions of parameters, that promise adequate capacity to digest massive datasets and offer powerful predictive analytics (such as high-dimensional latent features, intermediate representations, and decision functions) thereupon. In order to run ML algorithms at such scales, on a distributed cluster with tens to thousands of machines, it is often the case that significant engineering efforts are required, and one might fairly ask whether such engineering truly falls within the domain of ML research. Taking the view that "big" ML systems can benefit greatly from ML-rooted statistical and algorithmic insights, and that ML researchers should therefore not shy away from such systems design, we discuss a series of principles and strategies distilled from our recent efforts on industrial-scale ML solutions. These principles and strategies span a continuum from application, to engineering, and to theoretical research and development of big ML systems and architectures, with the goal of understanding how to make them efficient, generally applicable, and supported with convergence and scaling guarantees. They concern four key questions that traditionally receive little attention in ML research: How can an ML program be distributed over a cluster? How can ML computation be bridged with inter-machine communication? How can such communication be performed? What should be communicated between machines? By exposing underlying statistical and algorithmic characteristics unique to ML programs but not typically seen in traditional computer programs, and by dissecting successful cases to reveal how we have harnessed these principles to design and develop both high-performance distributed ML software and general-purpose ML frameworks, we present opportunities for ML researchers and practitioners to further shape and enlarge the area that lies between ML and systems.
Recently, Morabito (2010) studied the water spray phenomena of planing hulls and presented new analytical equations. However, these equations have not been used for detailed parametric studies of water spray around planing hulls. In this paper, a straightforward analysis is conducted to apply these analytical equations to finding the spray geometry profile, by developing a computer program based on the presented computational process. The results of the developed computer program are compared against existing data in the literature, and favorable accuracy is achieved. Parametric studies have been conducted for different physical parameters. Positions of the spray apex are computed and three-dimensional profiles of the spray are examined. It is concluded that spray height increases with an increase in the speed coefficient or the deadrise angle. Finally, a computational step is added to Savitsky's method and variations of the spray apex are computed for different velocities. It is shown that the vertical, lateral, and longitudinal positions of the spray increase as the craft speed increases. In addition, two new angles are defined in the top view; they are found to be directly related to the trim angle but inversely related to the deadrise angle.
Surface strain fields of the designed compact tension (CT) specimens were investigated by the digital image correlation (DIC) method. An integrative computer program was developed based on DIC algorithms to characterize the strain fields accurately and graphically. The strain distribution of the CT specimen was also predicted by the finite element method (FEM). Good agreement is observed between the surface strain fields measured by DIC and those predicted by FEM, which shows that the proposed method is practical and effective for determining the strain fields of CT specimens. Moreover, the strain fields of CT specimens under various compressive loads and with various notch diameters were studied by DIC. The experimental results provide an effective reference for the use of CT specimens in triaxial creep tests through appropriate selection of specimen and experiment parameters.
AIM: To evaluate the safety and effectiveness of two-stage vs single-stage management for concomitant gallstones and common bile duct stones. METHODS: Four databases, including PubMed, Embase, the Cochrane Central Register of Controlled Trials, and the Science Citation Index up to September 2011, were searched to identify all randomized controlled trials (RCTs). Data were extracted from the studies by two independent reviewers. The primary outcomes were stone clearance from the common bile duct, postoperative morbidity, and mortality. The secondary outcomes were conversion to other procedures, number of procedures per patient, length of hospital stay, total operative time, hospitalization charges, patient acceptance, and quality of life scores. RESULTS: Seven eligible RCTs [five trials (n = 621) comparing preoperative endoscopic retrograde cholangiopancreatography (ERCP)/endoscopic sphincterotomy (EST) + laparoscopic cholecystectomy (LC) with LC + laparoscopic common bile duct exploration (LCBDE); two trials (n = 166) comparing postoperative ERCP/EST + LC with LC + LCBDE], comprising 787 patients in total, were included in the final analysis. The meta-analysis detected no statistically significant difference between the two groups in stone clearance from the common bile duct [risk ratio (RR) = -0.10, 95% confidence interval (CI): -0.24 to 0.04, P = 0.17], postoperative morbidity (RR = 0.79, 95% CI: 0.58 to 1.10, P = 0.16), mortality (RR = 2.19, 95% CI: 0.33 to 14.67, P = 0.42), conversion to other procedures (RR = 1.21, 95% CI: 0.54 to 2.70, P = 0.39), length of hospital stay (MD = 0.99, 95% CI: -1.59 to 3.57, P = 0.45), or total operative time (MD = 12.14, 95% CI: -1.83 to 26.10, P = 0.09). Two-stage (LC + ERCP/EST) management clearly required more procedures per patient than single-stage (LC + LCBDE) management. CONCLUSION: Single-stage management is equivalent to two-stage management but requires fewer procedures. However, the patient's condition, the operator's expertise, and local resources should be taken into account in making treatment decisions.
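Risk ratios with 95% confidence intervals of the kind reported above are conventionally computed on the log scale for a single 2x2 table; pooling across trials, as the meta-analysis does, additionally requires a weighting scheme such as Mantel-Haenszel, which is not shown in this sketch (the event counts are made up):

```python
import math

def risk_ratio_ci(events1, n1, events2, n2, z=1.96):
    """Risk ratio of group 1 vs group 2 with an approximate 95% CI
    computed on the log scale (single 2x2 table, nonzero event counts)."""
    rr = (events1 / n1) / (events2 / n2)
    se = math.sqrt(1 / events1 - 1 / n1 + 1 / events2 - 1 / n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# hypothetical trial: 20/100 events vs 25/100 events
rr, lo, hi = risk_ratio_ci(20, 100, 25, 100)
```

The interval here straddles 1, so the difference would be reported as not statistically significant, mirroring the pattern in the pooled results above.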
Funding: supported in part by NIH DA039530 (to XJ) and a grant from the CURE Epilepsy Foundation (to XJ).
Funding: sponsored by the National Natural Science Foundation of China (Grant No. 70472075), the Jiangxi Province Natural Science Foundation (Grant No. 2007GZS0898), and the Science and Technology Project of the Department of Education of Jiangxi Province (Grant No. 2007-183).
Abstract: The Analytic Hierarchy Process (AHP) method can be used to solve multi-criterion decision problems, but some complicated problems cannot easily be solved by the general method, either because the consistency condition is not satisfied or because the judgment matrix is too intricate to solve, which invalidates AHP. To resolve this problem, AHP knowledge systems reduced with the aid of Genetic Algorithms (GA) were proposed, which directly acquire the ordering of the AHP issue through the rules of the Rough Sets Theory (RST) method, or solve the tasks reduced by RST with the classical AHP method. Under this approach, the comparative decision system of regional informatization level was solved, and the results were the same as those obtained by classical AHP, which shows that this method is simpler and more reliable; the four rules for converting an AHP system into an RST decision system are also given.
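For context, the consistency condition that can invalidate classical AHP is checked with the consistency ratio CR = CI / RI, where CI = (λmax − n)/(n − 1). The following is a minimal sketch using the standard AHP formulas and an illustrative judgment matrix:

```python
import numpy as np

# Standard AHP consistency check: a judgment matrix is usable when CR < 0.1.
# RI values are the conventional random consistency indices.
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def consistency_ratio(A):
    n = A.shape[0]
    lam_max = max(np.linalg.eigvals(A).real)   # principal eigenvalue
    ci = (lam_max - n) / (n - 1)               # consistency index
    return ci / RI[n]

# A perfectly consistent 3x3 judgment matrix (a_ij = w_i / w_j).
A = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.25, 0.5, 1.0]])
cr = consistency_ratio(A)   # ~0 for a fully consistent matrix
```

When CR exceeds 0.1, classical AHP fails, which is precisely the situation the GA/RST reduction in the abstract is designed to handle.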
Funding: Project supported by the National Natural Science Foundation of China (Nos. 10271110 and 10301028), the Teaching and Research Award Program for Outstanding Young Teachers in Higher Education Institutions of MOE, China, and two additional grants.
Abstract: Parallel machine scheduling problems, which are important discrete optimization problems, occur in many applications. For example, load balancing in network communication channel assignment, parallel processing in large-size computing, and task arrangement in flexible manufacturing systems are all multiprocessor scheduling problems. In traditional parallel machine scheduling problems, it is assumed that the problems are considered in an offline or online environment. But in practice, problems are often not truly offline or online but somewhere in between. This means that, with respect to the online problem, some further information about the tasks is available, which allows the performance of the best possible algorithms to be improved. Problems of this class are called semi-online. In this paper, the semi-online problem P2|decr|lp (p>1) is considered, where jobs arrive in non-increasing order of their processing times and the objective is to minimize the sum of the lp norm of every machine's load. It is shown that the LS algorithm is optimal for any lp norm, which extends the results known in the literature. Furthermore, randomized lower bounds for the problems P2|online|lp and P2|decr|lp are presented.
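The LS (List Scheduling) rule analyzed here assigns each arriving job to the machine with the currently smaller load. A small illustrative run under the semi-online decr assumption (jobs sorted in non-increasing order), with made-up processing times:

```python
# List Scheduling on two machines for P2|decr|lp: each job goes to the
# machine with the smaller current load; cost is the lp norm of the loads.

def ls_schedule(jobs):
    loads = [0.0, 0.0]
    for p in jobs:                        # jobs assumed sorted, largest first
        i = 0 if loads[0] <= loads[1] else 1
        loads[i] += p
    return loads

def lp_norm(loads, p):
    return sum(l ** p for l in loads) ** (1.0 / p)

loads = ls_schedule([4, 3, 3, 2])         # loads become [6.0, 6.0]
cost = lp_norm(loads, 2)
```

For decreasing job sequences, this greedy rule is exactly the algorithm shown optimal in the paper for every lp norm.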
Funding: Sponsored by the National Natural Science Foundation of China (Grant No. 70671040) and the Specialized Research Fund for the Doctoral Program of Higher Education (Grant No. 20050079008).
Abstract: In a CPM network, the longest-path problem is one of the most important subjects. Based on the intrinsic principle of the CPM network, the length of the paths between two arbitrary nodes is presented. Furthermore, the length of the longest path from the start node to an arbitrary node, and from an arbitrary node to the end node, is derived. For the scheduling problem of two activities with float in CPM scheduling, we put forward the Barycenter Theory and prove it based on the algorithm for the length of the longest path. By this theory, we know which activity should be done first. Finally, we illustrate the theory with an example.
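The longest-path length underlying such arguments can be computed by dynamic programming over a topological order, since a CPM network is acyclic by construction. A minimal sketch with illustrative activity durations:

```python
# Longest-path lengths from a start node in an acyclic CPM network,
# via dynamic programming over a topological order of the nodes.

def longest_paths(succ, order, start):
    """succ: node -> list of (next_node, duration); order: topological order."""
    dist = {v: float("-inf") for v in order}
    dist[start] = 0
    for u in order:
        if dist[u] == float("-inf"):
            continue                      # node unreachable from start
        for v, d in succ.get(u, []):
            dist[v] = max(dist[v], dist[u] + d)
    return dist

# Toy network: 1 -> 2 -> 4 and 1 -> 3 -> 4; critical path is 1-3-4.
succ = {1: [(2, 3), (3, 2)], 2: [(4, 4)], 3: [(4, 6)]}
dist = longest_paths(succ, [1, 2, 3, 4], 1)   # dist[4] == 8
```

Running the same recursion backwards from the end node gives the longest path from an arbitrary node to the end, the other quantity the abstract mentions.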
Abstract: Abdominal aortic aneurysm is a common vascular disease that affects the elderly population. Open surgical repair is regarded as the gold-standard technique for treatment of abdominal aortic aneurysm; however, endovascular aneurysm repair has expanded rapidly since its first introduction in the 1990s. As a less invasive technique, endovascular aneurysm repair has been confirmed to be an effective alternative to open surgical repair, especially in patients with co-morbid conditions. Computed tomography (CT) angiography is currently the preferred imaging modality for both preoperative planning and post-operative follow-up. 2D CT images are complemented by a number of 3D reconstructions that enhance the diagnostic applications of CT angiography in both planning and follow-up of endovascular repair. CT has the disadvantage of a high cumulative radiation dose, which is of particular concern in younger patients, since patients require regular imaging follow-up after endovascular repair and are thus exposed to repeated radiation for life. There is a trend to change from CT to ultrasound surveillance of endovascular aneurysm repair. Medical image visualizations provide excellent morphological assessment of the aneurysm and stent-grafts, but fail to capture the hemodynamic changes caused by the complex stent-graft device implanted into the aorta. This article reviews the treatment options for abdominal aortic aneurysm, various image visualization tools, and follow-up procedures using different modalities, including both imaging and computational fluid dynamics methods. Future directions to improve treatment outcomes in the follow-up of endovascular aneurysm repair are outlined.
Abstract: Recently, drones have found applicability in a variety of study fields, one of these being forestry, where increasing interest is given to this segment of technology, especially due to the high-resolution data that can be collected flexibly, in a short time, and at a relatively low price. Drones also play an important role in filling the gaps in data collected using manned aircraft or satellite remote sensing, while having many advantages both in research and in various practical applications, particularly in forestry as well as in land use in general. This paper aims to briefly describe different applications of UAVs (Unmanned Aerial Vehicles) in forestry, such as forest mapping, forest management planning, canopy height model creation, and mapping of forest gaps. These approaches have great potential for near-future applications, and their quick implementation in a variety of situations is desirable for the sustainable management of forests.
Funding: Supported by the National Natural Science Foundation of China (No. 10372068).
Abstract: The simplest normal form of resonant double Hopf bifurcation was studied based on the Lie operator. The coefficients of the simplest normal forms of resonant double Hopf bifurcation, and the nonlinear transformations in terms of the original system coefficients, were given explicitly. The nonlinear transformations were used for reducing the lower- and higher-order normal forms, and the rank of the system matrix was used to determine which normal form coefficients could be reduced. These steps make the resulting normal form simpler than the traditional one. A general program was compiled with Mathematica. This program can compute the simplest normal form of resonant double Hopf bifurcation, and the non-resonant form, up to the 7th order.
Funding: Supported by the National Iranian Oil Company (NIOC).
Abstract: This numerical study investigates the effects of using a diluted fuel (50% natural gas and 50% N2) in an industrial furnace under several cases of conventional combustion (air with 21% O2 at 300 and 1273 K) and highly preheated and diluted air combustion (HPDAC; air at 1273 K with 10% O2 and 90% N2), using an in-house computer program. It was found that by applying a combined diluted fuel and oxidant, instead of their uncombined and/or undiluted states, the best condition is obtained for establishing HPDAC's main unique features. These features are low mean and maximum gas temperatures and high radiation/total heat transfer to the gas and tubes, as well as greater uniformity of their distributions, which results in a decrease in NOx pollutant formation and an increase in furnace efficiency, i.e., energy saving. Moreover, the variety of chemical flame shapes, the process fluid and tube wall temperature profiles, the required regenerator efficiency, and the concentration and velocity patterns have also been studied qualitatively and quantitatively.
Abstract: This paper discusses the current work on the computerisation of all problems related to mining subsidence, including the time factor, carried out in the Division of Mining Geodesy of the Technical University of Silesia, Poland. First, the formulas implemented in the programs are presented. These formulas considerably increase the accuracy of the description of final deformations by taking into account the uncaved strip along the extraction rib (extraction margin). They also improve the description of deformation in areas located far from the extraction place. Then, the research results aimed at improving the description of deformation over time are introduced. Finally, the Windows-based version of the program for the creation of mining geological opinions is presented, in the form accepted by the Mining Offices of Poland.
Funding: Project (2007CB714706) supported by the Major State Basic Research and Development Program of China; Project (50678176) supported by the National Natural Science Foundation of China; Project (NCET-07-0866) supported by the Program for New Century Excellent Talents in University.
Abstract: By taking the cross-wind forces acting on trains into consideration, a dynamic analysis method for the cross-wind and high-speed train and slab track system was proposed on the basis of the analysis theory of spatial vibration of the high-speed train and slab track system. The corresponding computer program was written in FORTRAN. The dynamic responses of the high-speed train and slab track under cross-wind action were calculated, and the effects of the cross-wind on the dynamic responses of the system were analyzed. The results show that the cross-wind has a significant influence on the lateral and vertical displacement responses of the car body, the load reduction factor, and the overturning factor. For example, the maximum lateral displacement responses of the car body of the first trailer with and without cross-wind forces are 32.10 and 1.60 mm, respectively. The maximum vertical displacement responses of the car body of the first trailer with and without cross-wind forces are 6.60 and 3.29 mm, respectively. The maximum wheel load reduction factors of the first trailer with and without cross-wind forces are 0.43 and 0.22, respectively. The maximum overturning factors of the first trailer with and without cross-wind forces are 0.28 and 0.08, respectively. The cross-wind affects the derailment factor and the lateral Sperling factor of the moving train to a certain extent. However, the lateral and vertical displacement responses of the rails with the cross-wind are almost the same as those without it. The method presented and the corresponding computer program can be used to calculate the interaction between trains and track in cross-wind.
Abstract: This paper presents a method for the performance calibration of AACMMs (articulated arm coordinate measuring machines) according to the ASME B89.4.22 Standard. The growing use of this class of measurement equipment has been accompanied by an absence of accredited laboratories able to provide calibration certificates for its performance. Because ASME B89.4.22 and VDI 2617-9 are currently the only standards in the field of AACMM verification, IK4 Tekniker has compared both of them in order to develop internal test procedures that yield reliable performance calibration results. As a result, IK4 Tekniker has been recognized by the Spanish Accreditation Body (ENAC) in the field of AACMM calibration. Internal test procedures and an uncertainty evaluation analysis have been developed, and ENAC-certified reference test equipment has been acquired to ensure a suitable AACMM calibration process.
Funding: Supported in part by the National Natural Science Foundation of China (60073012), the National Grand Fundamental Research 973 Program of China (G1999032701), the National Research Foundation for the Doctoral Program of Higher Education of China, and the Natural Science Found
Abstract: Coverage analysis is a structural testing technique that helps to eliminate gaps in a test suite and determines when to stop testing. To compute test coverage, this letter proposes a new concept of coverage about variables, based on program slicing. By adding powers according to the importance of variables, users can focus on the important variables to obtain higher test coverage. The letter presents methods to compute basic coverage based on program structure graphs. In most cases, the coverage obtained in the letter is higher than that obtained by a traditional measure, because the coverage of a variable takes only the related code into account.
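The idea can be sketched as follows. Everything here is a hypothetical reading of the abstract: the slices, the powers, and the weighted-mean combination are made-up illustrations, and the letter's exact definitions may differ:

```python
# Hypothetical weighted variable coverage: each variable has a slice (the
# set of statements related to it) and a power (its importance weight);
# overall coverage is the weighted mean of per-variable slice coverage.

def variable_coverage(slices, powers, executed):
    """slices: var -> set of statement ids; powers: var -> weight;
    executed: set of statement ids covered by the test suite."""
    total = sum(powers.values())
    score = 0.0
    for var, stmts in slices.items():
        covered = len(stmts & executed) / len(stmts)
        score += powers[var] * covered
    return score / total

slices = {"x": {1, 2, 3, 4}, "y": {2, 5}}
powers = {"x": 3.0, "y": 1.0}            # x is the more important variable
cov = variable_coverage(slices, powers, executed={1, 2, 3})   # 0.6875
```

Raising a variable's power pulls the aggregate score toward that variable's slice coverage, which is how users "focus on the important variables".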
Abstract: The rise of big data has led to new demands for machine learning (ML) systems to learn complex models, with millions to billions of parameters, that promise adequate capacity to digest massive datasets and offer powerful predictive analytics (such as high-dimensional latent features, intermediate representations, and decision functions) thereupon. In order to run ML algorithms at such scales, on a distributed cluster with tens to thousands of machines, it is often the case that significant engineering efforts are required, and one might fairly ask whether such engineering truly falls within the domain of ML research. Taking the view that "big" ML systems can benefit greatly from ML-rooted statistical and algorithmic insights, and that ML researchers should therefore not shy away from such systems design, we discuss a series of principles and strategies distilled from our recent efforts on industrial-scale ML solutions. These principles and strategies span a continuum from application, to engineering, and to theoretical research and development of big ML systems and architectures, with the goal of understanding how to make them efficient, generally applicable, and supported with convergence and scaling guarantees. They concern four key questions that traditionally receive little attention in ML research: How can an ML program be distributed over a cluster? How can ML computation be bridged with inter-machine communication? How can such communication be performed? What should be communicated between machines?
By exposing underlying statistical and algorithmic characteristics unique to ML programs but not typically seen in traditional computer programs, and by dissecting successful cases to reveal how we have harnessed these principles to design and develop both high-performance distributed ML software as well as general-purpose ML frameworks, we present opportunities for ML researchers and practitioners to further shape and enlarge the area that lies between ML and systems.
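As a toy illustration of the first question (distributing an ML program over a cluster), data-parallel SGD splits the data across workers and has a server average their gradients. The code below simulates that pattern in a single process; it is purely illustrative and not code from the paper:

```python
# Simulated data-parallel SGD: "workers" compute gradients of mean squared
# error (w*x - y)^2 on their own data shards; a "server" averages the
# gradients and updates the shared parameter w.

def sgd_step(w, shards, lr=0.1):
    grads = []
    for shard in shards:
        # worker-local gradient of the MSE loss with respect to w
        g = sum(2 * (w * x - y) * x for x, y in shard) / len(shard)
        grads.append(g)
    return w - lr * sum(grads) / len(grads)   # server averages and updates

# Two "workers" jointly fit y = 2x; w converges toward 2.
w = 0.0
shards = [[(1, 2), (2, 4)], [(3, 6), (4, 8)]]
for _ in range(200):
    w = sgd_step(w, shards)
```

Real systems replace the inner loop with inter-machine communication, which is exactly where the paper's remaining three questions (bridging, performing, and pruning communication) arise.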
Abstract: Recently, Morabito (2010) studied the water spray phenomena in planing hulls and presented new analytical equations. However, these equations had not been used for detailed parametric studies of water spray around planing hulls. In this paper, a straightforward analysis is conducted to apply these analytical equations to finding the spray geometry profile, by developing a computer program based on the presented computational process. The results of the developed computer program are compared against existing data in the literature, and favorable accuracy is achieved. Parametric studies have been conducted for different physical parameters. Positions of the spray apex are computed, and three-dimensional profiles of the spray are examined. It is concluded that spray height increases with an increase in the speed coefficient or the deadrise angle. Finally, a computational process is added to Savitsky's method, and variations of the spray apex are computed for different velocities. It is shown that the vertical, lateral, and longitudinal positions of the spray increase as the craft speed increases. In addition, two new angles are defined in the top view, and it is concluded that they are directly related to the trim angle but inversely related to the deadrise angle.
Funding: Projects (51575347, 51405297, 51204107) supported by the National Natural Science Foundation of China.
Abstract: The surface strain fields of designed compact tension (CT) specimens were investigated by the digital image correlation (DIC) method. An integrative computer program was developed, based on DIC algorithms, to characterize the strain fields accurately and graphically. The strain distribution of the CT specimen was predicted by the finite element method (FEM). Good agreement is observed between the surface strain fields measured by DIC and those predicted by FEM, which shows that the proposed method is practical and effective for determining the strain fields of CT specimens. Moreover, the strain fields of CT specimens with various compressive loads and notch diameters were studied by DIC. The experimental results can provide an effective reference for the use of CT specimens in triaxial creep tests through appropriate selection of specimen and experiment parameters.
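At the core of DIC is subset matching by correlation: the displacement that maximizes the similarity between a reference subset and a deformed subset gives the local motion, from which strains are derived. A minimal sketch of the zero-normalized cross-correlation (ZNCC) criterion commonly used for this, with synthetic data (illustrative only, not the paper's program):

```python
import numpy as np

# Zero-normalized cross-correlation between two image subsets: 1.0 means a
# perfect match; values near 0 mean no correlation. DIC searches for the
# displacement whose candidate subset maximizes this score.

def zncc(f, g):
    f, g = f - f.mean(), g - g.mean()
    return float((f * g).sum() / np.sqrt((f ** 2).sum() * (g ** 2).sum()))

rng = np.random.default_rng(0)
ref = rng.random((11, 11))               # synthetic 11x11 reference subset
score_same = zncc(ref, ref)              # identical subsets match perfectly
score_diff = zncc(ref, rng.random((11, 11)))   # unrelated subset scores low
```

ZNCC is insensitive to uniform lighting changes because both subsets are mean-shifted and variance-normalized, which is why it is a standard matching criterion in DIC.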
Abstract: AIM: To evaluate the safety and effectiveness of two-stage vs single-stage management for concomitant gallstones and common bile duct stones. METHODS: Four databases, including PubMed, Embase, the Cochrane Central Register of Controlled Trials and the Science Citation Index up to September 2011, were searched to identify all randomized controlled trials (RCTs). Data were extracted from the studies by two independent reviewers. The primary outcomes were stone clearance from the common bile duct, postoperative morbidity, and mortality. The secondary outcomes were conversion to other procedures, number of procedures per patient, length of hospital stay, total operative time, hospitalization charges, patient acceptance, and quality of life scores. RESULTS: Seven eligible RCTs [five trials (n = 621) comparing preoperative endoscopic retrograde cholangiopancreatography (ERCP)/endoscopic sphincterotomy (EST) + laparoscopic cholecystectomy (LC) with LC + laparoscopic common bile duct exploration (LCBDE); two trials (n = 166) comparing postoperative ERCP/EST + LC with LC + LCBDE], comprising 787 patients in total, were included in the final analysis. The meta-analysis detected no statistically significant difference between the two groups in stone clearance from the common bile duct [risk ratios (RR) = -0.10, 95% confidence intervals (CI): -0.24 to 0.04, P = 0.17], postoperative morbidity (RR = 0.79, 95% CI: 0.58 to 1.10, P = 0.16), mortality (RR = 2.19, 95% CI: 0.33 to 14.67, P = 0.42), conversion to other procedures (RR = 1.21, 95% CI: 0.54 to 2.70, P = 0.39), length of hospital stay (MD = 0.99, 95% CI: -1.59 to 3.57, P = 0.45), or total operative time (MD = 12.14, 95% CI: -1.83 to 26.10, P = 0.09). Two-stage (LC + ERCP/EST) management clearly required more procedures per patient than single-stage (LC + LCBDE) management. CONCLUSION: Single-stage management is equivalent to two-stage management but requires fewer procedures. However, the patient's condition, the operator's expertise, and local resources should be taken into account in making treatment decisions.
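For illustration, a pooled risk ratio of the kind reported in such a meta-analysis can be computed by inverse-variance fixed-effect pooling of log risk ratios. The 2×2 counts below are invented numbers for demonstration, not data from the included trials:

```python
import math

# Fixed-effect inverse-variance pooling of log risk ratios across studies.
# Each study contributes (events_a, total_a, events_b, total_b).

def pooled_rr(studies):
    num = den = 0.0
    for ea, na, eb, nb in studies:
        log_rr = math.log((ea / na) / (eb / nb))
        var = 1/ea - 1/na + 1/eb - 1/nb    # variance of the log risk ratio
        w = 1.0 / var                      # inverse-variance weight
        num += w * log_rr
        den += w
    return math.exp(num / den)

# Two made-up studies with similar event rates in both arms.
rr = pooled_rr([(10, 100, 12, 100), (8, 150, 9, 148)])
```

Random-effects pooling (e.g., DerSimonian-Laird) would add a between-study variance term to each weight; the fixed-effect version above is the simpler baseline.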