Funding: Project (51008229) supported by the National Natural Science Foundation of China; also supported by the Key Laboratory of Road and Traffic Engineering of Tongji University, China.
Abstract: A simulation model was proposed to investigate the relationship between train delays and passenger delays and to predict the dynamic passenger distribution in a large-scale rail transit network. It was assumed that the time-varying origin-destination demand and the passenger path choice probabilities were given, and that passengers do not change their destinations or travel paths after a delay occurs. Capacity constraints of trains and queueing rules for alighting and boarding were taken into account. Using time-driven simulation, the states of passengers, trains and other facilities in the network were updated at every time step. The proposed methodology was also tested on a real network for demonstration. The results reveal that a short train delay does not necessarily result in passenger delays; on the contrary, some passengers may even benefit from a short delay. A large initial train delay, however, may result not only in knock-on train and passenger delays along the same line, but also in passenger delays across the entire rail transit network.
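The capacity-constrained boarding with FIFO queueing described in this abstract can be illustrated by a toy time-driven loop. The single-station setting, the `simulate_station` interface and all parameter values below are illustrative assumptions, not the paper's network-wide model:

```python
from collections import deque

def simulate_station(arrivals, capacity, train_steps):
    """Time-driven sketch of capacity-constrained boarding at one station.
    arrivals[t]  : passengers joining the queue at time step t
    capacity     : free space on each departing train
    train_steps  : time steps at which a train departs
    Returns (boarded_per_train, passengers_left_in_queue) under FIFO boarding.
    """
    queue = deque()
    boarded = []
    for t, n in enumerate(arrivals):
        queue.extend([t] * n)              # tag each passenger with arrival step
        if t in train_steps:
            # board at most `capacity` passengers, first come first served
            served = min(capacity, len(queue))
            for _ in range(served):
                queue.popleft()
            boarded.append(served)
    return boarded, len(queue)
```

Even this toy version reproduces the knock-on effect the paper studies: once a train departs full, the overflow queue is pushed onto later trains.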
Funding: Supported by the National Natural Science Foundation of China (60404012, 60674064), the UK EPSRC (GR/N13319 and GR/R10875), the National High Technology Research and Development Program of China (2007AA04Z193), the New Star of Science and Technology of Beijing City (2006A62), and the IBM China Research Lab 2007 UR-Program.
Abstract: A batch-to-batch optimal iterative learning control (ILC) strategy for the tracking control of product quality in batch processes is presented. A linear time-varying perturbation (LTVP) model is built for product quality around the nominal trajectories. To address model-plant mismatches, model prediction errors from the previous batch run are added to the model predictions for the current batch run. Tracking-error transition models can then be built, and the ILC law with direct error feedback is obtained explicitly. A rigorous theorem is proposed to prove the convergence of the tracking error under ILC. The proposed methodology is illustrated on a typical batch reactor, and the results show that trajectory-tracking performance is gradually improved by the ILC.
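The batch-to-batch error-feedback idea can be sketched with a plain P-type ILC update on a toy first-order plant. The plant, the fixed scalar gain and the update rule u_{k+1}[t] = u_k[t] + gain * e_k[t] are simplifying assumptions; the paper derives an optimal gain from the LTVP model instead:

```python
def plant(u):
    """Toy first-order batch plant (a stand-in for the batch reactor)."""
    y, out = 0.0, []
    for ut in u:
        y = 0.5 * y + ut
        out.append(y)
    return out

def ilc(y_ref, gain, batches):
    """P-type ILC sketch: after each batch, correct the input profile
    with the tracking error of that batch. Returns max-error per batch."""
    u = [0.0] * len(y_ref)
    errs = []
    for _ in range(batches):
        y = plant(u)
        e = [r - yt for r, yt in zip(y_ref, y)]
        errs.append(max(abs(x) for x in e))
        u = [ut + gain * et for ut, et in zip(u, e)]  # batch-to-batch update
    return errs
```

Running `ilc([1.0] * 5, 0.8, 30)` shows the tracking error shrinking monotonically from batch to batch, the qualitative behavior the theorem guarantees.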
Abstract: This paper reviews the requirements of Software Defined Radio (SDR) systems for high-speed wireless applications and compares how well the available technology choices, from ASICs and FPGAs to digital signal processors (DSPs) and general-purpose processors (GPPs), meet them.
Funding: The National High Technology Research and Development Programme of China (Nos. 2004AA104280, 2006AA01Z172).
Abstract: The problem of rate-aware transmission power control is investigated to improve the throughput of wireless ad hoc networks. The behavior of basic IEEE 802.11 DCF is approximated by p-persistent CSMA through a Markov chain model. The throughput model takes hidden terminals, multi-hop flows and concurrent interference into account. Numerical results show that the optimal transmission power derived from this model can balance the tradeoff between spatial reuse and data rate and hence yield maximum throughput.
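The core quantity in the p-persistent approximation of DCF is the per-slot success probability with n saturated stations, n*p*(1-p)^(n-1), which is maximized at p = 1/n. The sketch below illustrates only this textbook building block, not the paper's full model with hidden terminals and interference:

```python
def success_prob(n, p):
    """Probability that exactly one of n saturated stations transmits
    in a slot, under the p-persistent CSMA approximation."""
    return n * p * (1 - p) ** (n - 1)

def best_p(n, grid=10000):
    """Grid-search the per-slot transmission probability; the optimum
    is known in closed form to be p = 1/n."""
    return max((success_prob(n, k / grid), k / grid) for k in range(1, grid))[1]
```

For n = 10 stations the search recovers p = 0.1; pushing p higher buys each station more attempts but collapses the per-slot success probability, the same kind of tradeoff the paper balances between spatial reuse and data rate.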
Abstract: With the growth of the Internet and open software, additional software developers from the open community are available to participate in the development of software application systems. Aiming to leverage these resources, a new development model, CFI (call for implementation), is proposed. The basic idea of CFI is to publish a software project, in whole or in part, to the open community in certain phases of the software development lifecycle to call for implementation. This paper discusses the basic concepts and methods of a software development process in CFI mode. Two CFI modes with different granularities are analyzed, and one of them, the fine-granularity CFI mode, is discussed thoroughly, including its main methods and basic steps. To verify these ideas, a pilot project, an online store system, was built with the CFI development process. The online store system takes the traditional Model-View-Controller architecture, and common technologies such as Struts, Hibernate and Spring are used. The result shows that this new kind of software development mode is feasible, though many problems still require further study.
Abstract: This paper proposes a novel voice conversion method based on frequency warping. The frequency warping function is generated by mapping the formants of the source speaker to those of the target speaker. In addition to frequency warping, fundamental frequency adjustment, spectral envelope equalization, breathiness addition, and duration modification are also used to improve the similarity to the target speaker. The proposed method needs only a very small amount of training data for generating the warping function, thereby greatly facilitating its application. Systems based on the proposed method were used in the 2007 TC-STAR intra-lingual voice conversion evaluation for English and Spanish and in a cross-lingual voice conversion evaluation for Spanish. The evaluation results show that the proposed method achieves much better converted-speech quality than other methods as well as a good balance between quality and similarity. The IBM system was ranked No. 1 in the English evaluation and No. 2 in the Spanish evaluation. The results also show that the proposed method is a convenient and competitive method for cross-lingual voice conversion tasks.
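A formant-based warping function can be pictured as a piecewise-linear map anchored at matched formant pairs. The construction below (fixed endpoints at 0 Hz and the Nyquist frequency, linear interpolation between anchors) is an assumed illustration of the idea, not the paper's exact method:

```python
def make_warp(src_formants, tgt_formants, nyquist=8000.0):
    """Build a piecewise-linear frequency warping function from matched
    formant pairs (source Hz -> target Hz), pinned at 0 and Nyquist."""
    xs = [0.0] + list(src_formants) + [nyquist]
    ys = [0.0] + list(tgt_formants) + [nyquist]
    def warp(f):
        # find the segment containing f and interpolate linearly
        for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
            if x0 <= f <= x1:
                return y0 + (f - x0) * (y1 - y0) / (x1 - x0)
        return f  # outside [0, nyquist]: leave unchanged
    return warp
```

Two formant pairs are enough to define a usable map, which is consistent with the abstract's claim that very little training data is needed.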
Funding: Supported by the Shanghai Jiao Tong University and IBM CRL Joint Project.
Abstract: Efficient support for querying large-scale resource description framework (RDF) triples plays an important role in semantic web data management. This paper presents an efficient RDF query engine that evaluates SPARQL queries, in which an inverted index structure is employed for indexing the RDF triples. A set of operators on the inverted index was developed for query optimization and evaluation. A main-tree-shaped optimization algorithm was then developed that transforms a SPARQL query graph into the optimal query plan by effectively reducing the search space used to determine the optimal join order. The optimization collects a set of RDF statistics for estimating the execution cost of the query plan. Finally, the optimal query plan is evaluated using the defined operators to answer the given SPARQL query. Extensive tests were conducted on both synthetic and real datasets containing up to 100 million triples, with the results showing that this approach can answer most queries within 1 s and is extremely efficient and scalable in comparison with the previous best state-of-the-art RDF stores.
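The idea of indexing RDF triples with an inverted index can be sketched in a few lines: each term maps to the ids of the triples containing it, and a single triple pattern is answered by intersecting posting lists. The data layout and `match` interface below are toy assumptions, not the engine's actual operators:

```python
def build_index(triples):
    """Inverted index over (subject, predicate, object) triples:
    term -> set of triple ids containing that term in any position."""
    index = {}
    for i, triple in enumerate(triples):
        for term in triple:
            index.setdefault(term, set()).add(i)
    return index

def match(index, triples, s=None, p=None, o=None):
    """Evaluate one triple pattern by intersecting posting lists;
    None plays the role of a SPARQL variable."""
    ids = set(range(len(triples)))
    for pos, term in enumerate((s, p, o)):
        if term is not None:
            ids &= index.get(term, set())
            # the index is position-agnostic, so re-check the position
            ids = {i for i in ids if triples[i][pos] == term}
    return sorted(triples[i] for i in ids)
```

A full SPARQL evaluator would then join the results of several such patterns, which is where the paper's join-order optimization and cost statistics come in.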
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 60404012, 60874049), the National High-Tech Research & Development Program of China (Grant No. 2007AA041402), the New Star of Science and Technology of Beijing City (Grant No. 2006A62), and the IBM China Research Lab 2008 UR-Program.
Abstract: An optimal iterative learning control (ILC) strategy for improving endpoint product quality in semi-batch processes is presented by combining ILC with a neural network model. A control affine feed-forward neural network (CAFNN) is proposed to model the semi-batch process. The main advantage of the CAFNN is that the gradient of the endpoint product with respect to the input can be obtained analytically. Therefore, an optimal ILC law with direct error feedback is obtained explicitly, and the convergence of the tracking error can be analyzed theoretically. It is proved that the tracking errors converge to small values. The proposed modeling and control strategy is illustrated on a simulated isothermal semi-batch reactor, and the results show that the endpoint products can be improved gradually from batch to batch.
Abstract: Most types of Software-Defined Networking (SDN) architectures employ reactive rule dispatching to enhance real-time network control. The rule dispatcher, one of the key components of the network controller, generates and dispatches cache rules in response to the packet-in messages from the forwarding devices. It is important not only for ensuring semantic integrity between the control plane and the data plane, but also for preserving the performance and efficiency of the forwarding devices. In theory, generating the optimal cache rules on demand is a knotty problem due to its high computational complexity. In practice, however, the characteristics of real-life traffic and rule sets demonstrate that temporal and spatial localities can be leveraged by the rule dispatcher to significantly reduce computational overhead. In this paper, we take a deep dive into the reactive rule dispatching problem through modeling and complexity analysis, and then propose a set of algorithms named Hierarchy-Based Dispatching (HBD), which exploits the nesting hierarchy of rules to simplify the theoretical model of the problem and trades strict coverage optimality for a more practical but still superior rule generation result. Experimental results show that HBD achieves performance gains in rule cache capability and rule storage efficiency over existing approaches.
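The nesting hierarchy the abstract refers to can be illustrated with bit-string prefix rules: a rule covers another if its match set strictly contains it, and caching a rule safely means also installing the more specific rules nested inside it so they are never masked. This is a sketch of the hierarchy idea only, not the HBD algorithms themselves:

```python
def covers(a, b):
    """Rule a covers rule b if a's match prefix is a proper prefix of b's
    (bit-string prefixes stand in for real match fields)."""
    return b.startswith(a) and a != b

def cache_set(rules, packet):
    """For the rule hit by `packet` (longest matching prefix wins),
    dispatch it together with every rule nested inside it, preserving
    semantic integrity between controller and switch."""
    hit = max((r for r in rules if packet.startswith(r)), key=len)
    return [hit] + [r for r in rules if covers(hit, r)]
```

For `rules = ["0", "01", "011", "1"]` and packet `"0101"`, the hit is `"01"`, and `"011"` must ship with it; caching `"01"` alone would wrongly absorb packets destined for the more specific rule.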
Funding: Supported by the National Natural Science Foundation of China under Grant No. 60973143, the National High Technology Research and Development 863 Program of China under Grant No. 2008AA01A201, and the National Basic Research 973 Program of China under Grant No. 2007CB310900.
Abstract: Graphics processing units (GPUs) have taken on an important role in the general-purpose computing market in recent years. At present, the common approach to programming GPUs is to write GPU-specific code with low-level GPU APIs such as CUDA. Although this approach can achieve good performance, it creates serious portability issues: programmers are required to write a specific version of the code for each potential target architecture, which results in high development and maintenance costs. We believe it is desirable to have a programming model that provides source code portability between CPUs and GPUs, as well as across different GPUs, allowing programmers to write one version of the code that can be compiled and executed efficiently on either CPUs or GPUs without modification. In this paper, we propose MapCG, a MapReduce framework that provides source-code-level portability between CPUs and GPUs. In contrast to approaches such as OpenCL, our framework is based on MapReduce and provides a high-level programming model that makes programming much easier. We describe the design of MapCG, including the MapReduce-style high-level programming framework and the runtime system on the CPU and GPU. A prototype of the MapCG runtime, supporting multi-core CPUs and NVIDIA GPUs, was implemented. Our experimental results show that this implementation can execute the same source code efficiently on multi-core CPU platforms and GPUs, achieving an average speedup of 1.6-2.5x over previous MapReduce implementations on eight commonly used applications.
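The portability argument rests on the MapReduce contract: the user writes only a map function and a reduce function, and the runtime is free to execute them on any backend. The minimal engine below is a plain-Python stand-in for such a runtime (MapCG itself targets C/C++ on CPUs and GPUs); the `map_reduce` interface is an assumption for illustration:

```python
from collections import defaultdict

def map_reduce(data, map_fn, reduce_fn):
    """Minimal MapReduce core: map each input item to (key, value) pairs,
    group by key, then reduce each group. The same map_fn/reduce_fn pair
    could, in a MapCG-style framework, run on either a CPU or a GPU backend."""
    groups = defaultdict(list)
    for item in data:
        for key, value in map_fn(item):
            groups[key].append(value)
    return {k: reduce_fn(k, vs) for k, vs in groups.items()}

# Word count written once against the framework interface
counts = map_reduce(
    ["a b a", "b c"],
    lambda line: [(w, 1) for w in line.split()],
    lambda key, values: sum(values),
)
```

Because the application code never mentions threads, blocks or devices, swapping the sequential loop for a multi-core or GPU grouping/reduction stage requires no change to the user's two functions, which is exactly the source-level portability the paper claims.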