Funding: Characteristic Innovation Projects of Ordinary Universities of Guangdong Province, China (No. 2022KTSCX150); Zhaoqing Education Development Institute Project, China (No. ZQJYY2021144); Zhaoqing College Quality Project and Teaching Reform Project, China (Nos. zlgc202003 and zlgc202112).
Abstract: The paper discusses the statistical inference problem for the compound Poisson vector process (CPVP) in the domain of attraction of the normal law but with an infinite covariance matrix. An empirical likelihood (EL) method is proposed to construct confidence regions for the mean vector. It generalizes the case of finite second-order moments to infinite second-order moments within the domain of attraction of the normal law. The log-empirical likelihood ratio statistic for the average number of the CPVP converges in distribution to an F distribution when the population is in the domain of attraction of the normal law but has an infinite covariance matrix. Simulation results are presented to illustrate the method.
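The empirical likelihood machinery behind this kind of construction can be illustrated in the scalar case. The sketch below is a toy, not the paper's CPVP setting: it computes the log-EL ratio statistic for a candidate mean by solving the Lagrange-multiplier equation numerically with a bracketing root finder.

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu):
    """-2 * log empirical likelihood ratio for the mean of a 1-D sample.

    Solves sum((x_i - mu) / (1 + t*(x_i - mu))) = 0 for the Lagrange
    multiplier t, then returns 2 * sum(log(1 + t*(x_i - mu))).
    """
    d = np.asarray(x, dtype=float) - mu
    if d.max() <= 0 or d.min() >= 0:
        raise ValueError("mu must lie strictly inside the sample range")
    # keep every weight 1 + t*d_i strictly positive on the search interval
    lo = -1.0 / d.max()
    hi = -1.0 / d.min()
    g = lambda t: np.sum(d / (1.0 + t * d))
    t = brentq(g, lo * (1 - 1e-9), hi * (1 - 1e-9))
    return 2.0 * np.sum(np.log1p(t * d))
```

At the sample mean the statistic is zero, and it grows as the candidate mean moves away, which is what makes it usable for confidence regions.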
Funding: National Natural Science Foundation of China (No. 60504033).
Abstract: An iterative (run-to-run) optimization method is presented for batch processes under input constraints. It is generally very difficult to acquire an accurate mechanistic model for a batch process. Because the support vector machine is powerful for problems characterized by small samples, nonlinearity, high dimension, and local minima, support vector regression models were developed for the end-point optimization of batch processes. Since there is no analytical way to find the optimal trajectory, an iterative method is used to exploit the repetitive nature of batch processes and determine the optimal operating policy. The optimization algorithm is proved to be convergent. Numerical simulation shows that the method can improve process performance through iterations.
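A minimal sketch of the run-to-run idea, under heavy assumptions: the "plant" below is an invented quadratic stand-in for the true batch end-point map, and each iteration refits an SVR model on all batches so far, then picks the constrained input that maximizes the predicted end-point quality.

```python
import numpy as np
from sklearn.svm import SVR

def plant(u):
    # stand-in for the true (unknown) batch end-point map; purely illustrative
    return 1.0 - (u - 0.6) ** 2

u_hist = [0.0, 0.25, 0.5, 0.75, 1.0]           # historical batch inputs
y_hist = [plant(u) for u in u_hist]            # measured end-point quality
grid = np.linspace(0.0, 1.0, 201)              # input constraint: 0 <= u <= 1

for _ in range(8):                             # run-to-run iterations
    model = SVR(kernel="rbf", C=100.0, epsilon=1e-3)
    model.fit(np.array(u_hist).reshape(-1, 1), y_hist)
    # optimize the model-predicted end point over the feasible grid
    u_next = float(grid[np.argmax(model.predict(grid.reshape(-1, 1)))])
    u_hist.append(u_next)                      # run the next batch at u_next
    y_hist.append(plant(u_next))
```

In this toy setting the selected input typically settles near the true optimum u = 0.6 as data accumulates around it; a real run-to-run scheme would also handle plant-model mismatch and measurement noise.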
Funding: Sponsored by the Research Foundation of Beijing Institute of Technology (No. 20080642001).
Abstract: A new fault-diagnosis method for batch processes based on multi-phase regression is presented to overcome the difficulty arising from non-uniform sample data in each phase. A support vector machine is first used for phase identification, and for each phase an improved artificial immune network is developed to analyze and recognize fault patterns. A new cell elimination rule is proposed to enhance the incremental clustering capability of the immune network. The proposed method has been applied to glutamic acid fermentation, and comparison results indicate that the proposed approach classifies fault samples better and yields higher diagnostic precision.
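The first step, SVM-based phase identification, can be sketched as a plain multi-class classification problem. The data below are synthetic (three well-separated operating phases), not fermentation measurements, and the immune-network stage is omitted.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# synthetic two-variable batch data from three operating phases (illustrative only)
n = 100
X = np.vstack([rng.normal(m, 0.3, size=(n, 2)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], n)

# an RBF-kernel SVM assigns each new sample to one of the phases
phase_clf = SVC(kernel="rbf", C=10.0).fit(X, y)
acc = phase_clf.score(X, y)
```

Once a sample's phase is known, a phase-specific diagnosis model (here, the paper's immune network) can be applied to it.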
Funding: Supported by the National Natural Science Foundation of China (No. U1960202).
Abstract: With the development of automation and informatization in the steelmaking industry, the human brain gradually fails to cope with the increasing amount of data generated during the steelmaking process. Machine learning technology provides a method beyond production experience and metallurgical principles for dealing with large amounts of data. The application of machine learning in the steelmaking process has become a research hotspot in recent years. This paper provides an overview of the applications of machine learning to steelmaking process modeling, covering hot metal pretreatment, primary steelmaking, secondary refining, and some other aspects. The three most frequently used machine learning algorithms in steelmaking process modeling are the artificial neural network, support vector machine, and case-based reasoning, accounting for 56%, 14%, and 10% of applications, respectively. Data collected in steelmaking plants are frequently faulty; thus data processing, especially data cleaning, is crucially important to the performance of machine learning models. The detection of variable importance can be used to optimize process parameters and guide production. Machine learning is used in hot metal pretreatment modeling mainly for endpoint S content prediction. The prediction of endpoint element compositions and process parameters is widely investigated in primary steelmaking. Machine learning is used in secondary refining modeling mainly for ladle furnace, Ruhrstahl-Heraeus, vacuum degassing, argon oxygen decarburization, and vacuum oxygen decarburization processes. Further development of machine learning in steelmaking process modeling can be realized through additional efforts in the construction of data platforms, the industrial transformation of research achievements to practical steelmaking, and the improvement of the universality of machine learning models.
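The variable-importance detection mentioned above can be sketched with permutation importance: shuffle one input variable at a time and measure how much the model's score degrades. The data are synthetic (variable 0 strongly drives the target, variable 1 weakly, variables 2-3 not at all), not real steelmaking measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
# endpoint depends strongly on variable 0, weakly on variable 1, not on 2-3
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.05 * rng.normal(size=300)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]   # most important first
```

On plant data, such a ranking points at the process parameters most worth monitoring or optimizing.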
Abstract: A novel method is proposed for developing a reliable data-driven soft sensor to improve the prediction accuracy of sulfur content in the hydrodesulfurization (HDS) process. To this end, an integrated approach using support vector regression (SVR) based on wavelet transform (WT) and principal component analysis (PCA) is used. Experimental data from the HDS setup were employed to validate the proposed model. The results reveal that the integrated WT-PCA with SVR model increases the prediction accuracy over the SVR model alone, delivering satisfactory prediction performance (EAARE = 0.058 and R2 = 0.97). The obtained results indicate that the proposed model is more reliable and more precise than multiple linear regression (MLR), SVR, and PCA-SVR.
Funding: Financial support for this work from the National Natural Science Foundation of China (Nos. 21978150 and 21706143).
Abstract: Methanol-to-olefin (MTO) technology provides the opportunity to produce olefins from non-petroleum sources such as coal, biomass, and natural gas. More than 20 commercial MTO plants have been put into operation, yet contributions on the optimal operation of industrial MTO plants from a process systems engineering perspective remain rare. Based on the relevance vector machine (RVM), a data-driven framework for optimal operation of the industrial MTO process is established to fully utilize the plentiful industrial data sets. RVM correlates the yield distribution prediction of the main products with the operating conditions. These correlations then serve as the constraints in a multi-objective optimization model that pursues the optimal operation of the plant. The non-dominated sorting genetic algorithm II (NSGA-II) is used to solve the optimization problem. Comprehensive tests demonstrate that the ethylene yield is effectively improved under the proposed framework. Since RVM provides a distribution prediction instead of a point estimate, the established model is expected to provide guidance for actual production operations under uncertainty.
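The heart of NSGA-II's sorting stage is the Pareto-dominance test. A minimal pure-Python version (minimization convention, not the paper's implementation) that extracts the first non-dominated front:

```python
def nondominated_front(points):
    """Return indices of the Pareto-optimal points (all objectives minimized).

    A point p is dominated if some other point q is no worse in every
    objective and strictly better in at least one -- the same test that
    drives the sorting stage of NSGA-II.
    """
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and
            any(q[k] < p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front
```

A full NSGA-II additionally sorts the remaining points into successive fronts and uses crowding distance to keep the population diverse.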
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under grant number (RGP 2/158/43), and to Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R161), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Collecting and storing big health data for further analysis is challenging because such data are large and have many features. Several cloud-based IoT health providers have been described in the literature. Furthermore, a number of issues arise concerning time consumption and overall network performance when dealing with big data. Existing methods rely on poorly performing optimization algorithms for optimizing the data. In the proposed method, the Chaotic Cuckoo Optimization algorithm is used for feature selection, and a Convolutional Support Vector Machine (CSVM) is used for classification. The research presents a method for analyzing healthcare information for use in future prediction. The major goal is to handle a variety of data while improving efficiency and minimizing processing time. The suggested method employs a hybrid approach divided into two stages. In the first stage, it reduces the features using the Chaotic Cuckoo Optimization algorithm with Levy flight, opposition-based learning, and a distributor operator. In the second stage, the CSVM is used, which combines the benefits of the convolutional neural network (CNN) and the SVM: the CSVM modifies CNN's convolution product to learn hidden patterns deep inside data sources. For improved economic flexibility, greater protection, richer analytics with confidentiality, and lower operating cost, the suggested approach is built on fog computing. Overall, the experiments show that the suggested method can minimize the number of features in the datasets, enhance accuracy by 82%, and decrease processing time.
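Two generic ingredients of chaotic cuckoo search can be sketched directly: a logistic chaotic map (a common way to inject chaos into the search) and a Mantegna-style Levy-flight step. This is an assumption-laden illustration of those building blocks, not the paper's full algorithm.

```python
import math
import random

def logistic_map(x0, n, r=4.0):
    """Chaotic logistic-map sequence in [0, 1], often used to seed or
    perturb the nest positions in chaotic cuckoo search."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def levy_step(rng, beta=1.5):
    """One Mantegna-style Levy-flight step: a heavy-tailed random move size."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0.0, sigma)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)
```

In a feature-selection wrapper, each nest would encode a feature subset; Levy-flight moves explore subsets and the chaotic sequence replaces uniform random draws.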
Abstract: On-line monitoring and fault diagnosis of chemical processes are extremely important for operation safety and product quality. Principal component analysis (PCA) has been widely used in multivariate statistical process monitoring for its ability to reduce process dimensionality. PCA and other statistical techniques, however, have difficulty differentiating faults correctly in complex chemical processes. The support vector machine (SVM) is an approach based on statistical learning theory that has emerged for feature identification and classification. In this paper, an integrated method is applied for process monitoring and fault diagnosis, combining PCA for fault feature extraction with multiple SVMs for identifying different fault sources. The approach is verified and illustrated on the Tennessee Eastman benchmark process as a case study. Results show that the proposed PCA-SVMs method has good diagnostic capability and a high overall diagnosis correctness rate.
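The PCA-plus-multiple-SVMs scheme can be sketched as PCA feature extraction followed by one-vs-rest SVM classifiers. The data here are synthetic (normal operation plus two fault classes shifted in different directions), standing in for Tennessee Eastman measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 80
# class 0: normal; classes 1 and 2: faults shifted along different variables
X = np.vstack([rng.normal(0, 0.5, (n, 10)),
               rng.normal(0, 0.5, (n, 10)) + np.r_[3.0, np.zeros(9)],
               rng.normal(0, 0.5, (n, 10)) + np.r_[0.0, 3.0, np.zeros(8)]])
y = np.repeat([0, 1, 2], n)

Z = PCA(n_components=4).fit_transform(X)          # fault feature extraction
clf = OneVsRestClassifier(SVC(kernel="rbf")).fit(Z, y)  # one SVM per fault source
acc = clf.score(Z, y)
```

Each binary SVM specializes in separating one fault source from the rest, which is the "multiple SVMs" part of the integrated method.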
Funding: National Natural Science Foundation of China (No. 61374140); Youth Foundation of the National Natural Science Foundation of China (No. 61403072).
Abstract: Complex industrial processes often need multiple operation modes to meet changing production conditions, and within a given mode the samples belonging to that mode may be discrete. It is therefore important to account for samples that are sparse within a mode. To solve this issue, a new approach called density-based support vector data description (DBSVDD) is proposed. In this article, an algorithm combining the Gaussian mixture model (GMM) with the DBSVDD technique is proposed for process monitoring. The GMM method is used to obtain the center of each mode and determine the number of modes. Considering the complexity of the data distribution and the discrete samples encountered in monitoring, DBSVDD is utilized for process monitoring. Finally, the validity and effectiveness of the DBSVDD method are illustrated through the Tennessee Eastman (TE) process.
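The mode-then-boundary structure can be sketched on synthetic data: a GMM locates the operating modes, and a one-class boundary is fitted per mode. Here a `OneClassSVM` with an RBF kernel stands in for SVDD (the two are closely related), and the density-based refinement of the paper is omitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(4)
# two synthetic operating modes
X = np.vstack([rng.normal(0.0, 0.3, (150, 2)), rng.normal(3.0, 0.3, (150, 2))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)

# one boundary per mode (OneClassSVM with an RBF kernel as an SVDD stand-in)
boundaries = {k: OneClassSVM(nu=0.05, gamma="scale").fit(X[labels == k])
              for k in range(2)}

def in_control(x):
    """Assign a new sample to its most likely mode, then test that mode's boundary."""
    k = int(gmm.predict(x.reshape(1, -1))[0])
    return boundaries[k].predict(x.reshape(1, -1))[0] == 1
```

A sample far from every learned mode falls outside its assigned boundary and is flagged as a potential fault.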
Abstract: This project develops a system for animal researchers and wildlife photographers to overcome the many challenges they face in their daily work. They may need to wait patiently for long hours, sometimes several days, in remote locations and under severe weather conditions before capturing what they are interested in, and there is high demand for rare wildlife photographs. The proposed method automates the task using a microcontroller-controlled camera together with image processing and machine learning techniques. First, with the aid of the microcontroller and four passive IR sensors, the system automatically detects the presence of an animal and rotates the camera toward it. A motion detection algorithm then centers the animal in the frame, and a high-end autofocus web camera captures the image. The captured images are sent to a PC and compared with a photograph database to check whether the animal matches the photographer's choice; if it does, the system automatically captures more images. Although several technologies are available, none of them can recognize what they capture, and none detect the presence of animals from different angles. Most available equipment uses a set of PIR sensors, and whatever disturbs the IR field is automatically captured and stored. Night-time images are black and white with limited detail and clarity due to infrared flash quality; if the infrared flash is designed for best image quality, range is sacrificed. The photographer might be interested in a specific animal, but existing systems offer no facility to recognize automatically whether the captured animal is the photographer's choice.
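The sensor-to-pan step can be sketched in a few lines. The layout below is an assumption (one PIR per compass direction); the microcontroller would run the equivalent logic in firmware.

```python
import math

def pan_angle(pir):
    """Map four passive-IR sensors (assumed mounted facing N, E, S, W) to a
    camera pan bearing in degrees; returns None when no sensor is triggered."""
    bearings = {"N": 0, "E": 90, "S": 180, "W": 270}
    hits = [bearings[s] for s, on in pir.items() if on]
    if not hits:
        return None
    # average the triggered bearings on the unit circle so N + W gives 315, not 135
    x = sum(math.cos(math.radians(b)) for b in hits)
    y = sum(math.sin(math.radians(b)) for b in hits)
    return round(math.degrees(math.atan2(y, x))) % 360
```

Averaging on the unit circle handles the wrap-around at 0/360 degrees, which a plain arithmetic mean of bearings gets wrong.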
Abstract: In this article, we propose two control charts, the "Multivariate Group Runs" (MV-GR-M) and the "Multivariate Modified Group Runs" (MV-MGR-M) control charts, based on multivariate normal processes, for monitoring the process mean vector. Methods to obtain the design parameters and the operation of these control charts are discussed. The performance of the proposed charts is compared with some existing control charts. It is verified that the proposed charts give a significant reduction in the out-of-control "Average Time to Signal" (ATS), in the zero state as well as in the steady state, compared to the Hotelling's T2 and the synthetic T2 control charts.
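The Hotelling's T2 chart used as the benchmark above can be sketched directly: compute T2 for each observation against an in-control reference set and compare it with an F-based control limit. The data and alpha level are illustrative assumptions.

```python
import numpy as np
from scipy.stats import f

def t2_limit(n, p, alpha=0.005):
    """Phase-II Hotelling T^2 upper control limit for a single observation,
    with mean and covariance estimated from n in-control samples."""
    c = p * (n + 1) * (n - 1) / (n * (n - p))
    return c * f.ppf(1 - alpha, p, n - p)

rng = np.random.default_rng(5)
n, p = 200, 3
X = rng.normal(size=(n, p))                        # in-control reference data
mu = X.mean(axis=0)
S_inv = np.linalg.inv(np.cov(X, rowvar=False))

def t2(x):
    """Hotelling T^2 statistic of one observation x."""
    d = x - mu
    return float(d @ S_inv @ d)
```

A point signals when t2(x) exceeds the limit; group-runs variants such as MV-GR-M change the signaling rule, not the statistic.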
Funding: Funded by the Natural Science Foundation of China (No. 70471050).
Abstract: In this paper, the definition of the vector FIGARCH process is established, and the stationarity and some properties of the process are discussed. Based on the stationarity and the results of Du and Zhang [1], we verify the persistence in variance of the vector FIGARCH process, establish the sufficient and necessary condition for co-persistence in variance of the process, and discuss the constant related vector FIGARCH(p, d, q) process as a special case.
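The fractional differencing at the core of any FIGARCH specification comes from expanding the operator (1 - L)^d. Its coefficients obey a simple recursion, sketched here as a generic illustration rather than the paper's vector construction:

```python
def frac_diff_weights(d, n):
    """First n coefficients pi_k of the fractional difference operator (1 - L)^d,
    via the recursion pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w
```

For 0 < d < 1 the weights decay hyperbolically rather than geometrically, which is what produces the long-memory (persistent) behavior in variance.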
Abstract: One of the critical hurdles, and breakthroughs, in the field of Natural Language Processing (NLP) in the last two decades has been the development of techniques for text representation that address the so-called curse of dimensionality, a problem that plagues NLP in general given that the feature set for learning starts as a function of the size of the language in question, typically upwards of hundreds of thousands of terms. As such, much of the research and development in NLP in the last two decades has been devoted to finding and optimizing solutions to this problem, that is, to effective feature selection in NLP. This paper looks at the development of these techniques, which leverage a variety of statistical methods resting on linguistic theories advanced in the middle of the last century, notably the distributional hypothesis, which suggests that words found in similar contexts generally have similar meanings. In this survey paper we examine some of the most popular of these techniques from a mathematical as well as a data-structure perspective, from Latent Semantic Analysis to vector space models to their more modern variants, typically referred to as word embeddings. In this review of algorithms such as Word2Vec, GloVe, ELMo, and BERT, we explore the idea of semantic spaces more generally, beyond their applicability to NLP.
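The oldest of these techniques, Latent Semantic Analysis, can be sketched end-to-end on a toy corpus: build a term-document count matrix and take a truncated SVD, so that words occurring in similar documents land near each other in the reduced space.

```python
import numpy as np

# tiny illustrative corpus (not from the paper)
docs = ["the cat sat on the mat", "the dog sat on the rug",
        "stocks fell on the news", "markets and stocks rallied"]
vocab = sorted({w for d in docs for w in d.split()})
idx = {w: i for i, w in enumerate(vocab)}

# term-document count matrix
M = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        M[idx[w], j] += 1.0

# truncated SVD: rows of U * s are the rank-2 LSA word vectors
U, s, _ = np.linalg.svd(M, full_matrices=False)
vecs = U[:, :2] * s[:2]

def cos(a, b):
    va, vb = vecs[idx[a]], vecs[idx[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-12))
```

Even at this scale, "cat" and "dog" (which share a context) end up more similar than "cat" and "stocks", which is the distributional hypothesis in miniature; Word2Vec and its successors learn comparable geometry from prediction tasks rather than matrix factorization.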
Abstract: Geological data are stored in vector format in a geographical information system (GIS), while other data such as remote sensing images, geographical data, and geochemical data are saved in raster format. This paper converts the vector data into 8-bit images programmatically, according to the importance of each layer to mineralization. With this method, geological meaning can be conveyed through the raster images. The paper also fuses geographical and geochemical data with the programmed strata data. The results show that image fusion can express different intensities effectively and visualize structural characteristics in two dimensions. Furthermore, it can produce optimized information from multi-source data and express it more directly.
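The vector-to-8-bit conversion step amounts to scaling an attribute field into the 0-255 range, optionally weighted by an assumed importance to mineralization. A minimal sketch (the weighting scheme and field values are placeholders, not the paper's):

```python
import numpy as np

def to_uint8(values, weight=1.0):
    """Scale a field of attribute values to an 8-bit raster band, optionally
    weighted by its assumed importance to mineralization."""
    v = np.asarray(values, dtype=float) * weight
    lo, hi = v.min(), v.max()
    if hi == lo:
        return np.zeros(v.shape, dtype=np.uint8)   # flat field -> all zeros
    return np.round(255.0 * (v - lo) / (hi - lo)).astype(np.uint8)
```

Once every layer is an 8-bit band, standard raster fusion (band stacking, weighted overlay) applies uniformly to the geological, geographical, and geochemical data.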
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 70171059).
Abstract: In this paper, by considering the stochastic process of the busy period and the idle period, and introducing the unfinished work as a supplementary variable, a new vector Markov process is presented to study the M/G/1 queue again. By establishing and solving the density evolution equations, the busy-period distribution and the stationary distributions of waiting time and queue length are obtained. In addition, the stability condition of the queue system is given by means of an imbedded renewal process.
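A classical closed-form benchmark for the M/G/1 stationary quantities derived above is the Pollaczek-Khinchine formula for the mean waiting time, shown here as a sanity check rather than the paper's supplementary-variable derivation:

```python
def pk_mean_wait(lam, es, es2):
    """Mean waiting time in queue for M/G/1 (Pollaczek-Khinchine):
    W_q = lam * E[S^2] / (2 * (1 - rho)), where rho = lam * E[S] < 1
    is the stability condition."""
    rho = lam * es
    if rho >= 1.0:
        raise ValueError("unstable queue: rho must be < 1")
    return lam * es2 / (2.0 * (1.0 - rho))
```

With exponential service (E[S^2] = 2 E[S]^2) this reduces to the familiar M/M/1 result, and deterministic service halves the mean wait, illustrating how service-time variance enters only through E[S^2].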