With the popularity of smart handheld devices, mobile streaming video has multiplied global network traffic in recent years. Growing concern for users' quality of experience (QoE) has made rate adaptation methods very attractive. In this paper, we propose a two-phase rate adaptation strategy to improve users' real-time video QoE. First, to measure and assess video QoE, we provide a continuous QoE prediction engine modeled by a recurrent neural network (RNN). Unlike traditional QoE models, which consider the QoE-aware factors separately or incompletely, our RNN-QoE model accounts for three descriptive factors (video quality, rebuffering, and rate change) and reflects the impact of cognitive memory and recency. In addition, video playback is separated into an initial startup phase and a steady playback phase, and we take a different optimization goal for each: the former aims at shortening the startup delay, while the latter improves video quality and reduces rebuffering. Simulation results show that RNN-QoE can track subjective QoE quite well, and that the proposed strategy effectively reduces the rebuffering caused by the mismatch between the requested video rates and the fluctuating throughput, attaining standout real-time QoE compared with classical rate adaptation methods.
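Below is a minimal sketch, not the authors' exact RNN-QoE architecture, of how a recurrent model can map a per-chunk sequence of the three descriptive factors to a continuous QoE trace; the sequence length, layer sizes, and the randomly generated training data are assumptions for illustration.

```python
# Sketch only: an RNN that outputs one continuous QoE value per video chunk
# from three QoE-aware factors (video quality, rebuffering, rate change).
import numpy as np
from tensorflow.keras import layers, models

SEQ_LEN, N_FACTORS = 30, 3  # assumed: 30 chunks, 3 factors per chunk

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, N_FACTORS)),
    layers.LSTM(32, return_sequences=True),   # carries memory/recency effects
    layers.TimeDistributed(layers.Dense(1)),  # continuous QoE at every chunk
])
model.compile(optimizer="adam", loss="mse")

# Hypothetical data standing in for factor sequences and subjective QoE traces.
x = np.random.rand(64, SEQ_LEN, N_FACTORS).astype("float32")
y = np.random.rand(64, SEQ_LEN, 1).astype("float32")
model.fit(x, y, epochs=2, batch_size=16, verbose=0)
print(model.predict(x[:1], verbose=0).shape)  # (1, 30, 1): one QoE value per chunk
```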
Image/video stitching is a technology for overcoming the field-of-view (FOV) limitation of images/videos. It stitches multiple overlapping images/videos to generate a wide-FOV image/video, and has been used in various fields such as sports broadcasting, video surveillance, street view, and entertainment. This survey reviews image/video stitching algorithms, with a particular focus on those developed in recent years. Image stitching first calculates the correspondences between multiple overlapping images, deforms and aligns the matched images, and then blends the aligned images to generate a wide-FOV image. A seamless method is usually adopted to eliminate such potential flaws as ghosting and blurring caused by parallax or by objects moving across the overlapping regions. Video stitching is a further extension of image stitching. It usually stitches selected frames of the original videos with image stitching algorithms to generate a stitching template, and the subsequent frames can then be stitched according to that template. Video stitching is more complicated in the presence of moving objects or violent camera movement, because these factors introduce jitter, shakiness, ghosting, and blurring. Foreground detection techniques are usually combined with stitching to eliminate ghosting and blurring, while video stabilization algorithms are adopted to remove jitter and shakiness. This paper further discusses panoramic stitching as a special extension of image/video stitching; panoramic stitching is currently the most widely used stitching application. This survey reviews the latest image/video stitching methods, and introduces the fundamental principles, advantages, and weaknesses of image/video stitching algorithms. Image/video stitching faces long-term challenges such as wide baselines, large parallax, and low-texture overlapping regions. New technologies, such as deep learning-based semantic correspondence and 3D image stitching, may present new opportunities to address these issues. Finally, this survey discusses the challenges of image/video stitching and proposes potential solutions.
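As a concrete illustration of the match-align-blend pipeline described above, the sketch below stitches two overlapping images with OpenCV; the file names are placeholders, ORB stands in for whichever detector a real system uses, and the overlay step omits the seam finding and blending a production stitcher would add.

```python
# Sketch: match features between two overlapping images, estimate a homography,
# warp one image into the other's frame, and overlay them on a wider canvas.
import cv2
import numpy as np

left = cv2.imread("left.jpg")    # placeholder paths; use real overlapping images
right = cv2.imread("right.jpg")

orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(left, None)
kp2, des2 = orb.detectAndCompute(right, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:500]

src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # maps right -> left frame

h, w = left.shape[:2]
canvas = cv2.warpPerspective(right, H, (w * 2, h))  # widened canvas for the result
canvas[0:h, 0:w] = left                             # naive overlay, no seam blending
cv2.imwrite("panorama.jpg", canvas)
```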
With the increasing popularity of solid-state lighting devices, Visible Light Communication (VLC) is globally recognized as an advanced and promising technology for short-range, high-speed, large-capacity wireless data transmission. In this paper, we propose a prototype of a real-time audio and video broadcast system using inexpensive, commercially available light-emitting diode (LED) lamps. Experimental results show that real-time, high-quality audio and video can be achieved at distances of up to 3 m through proper layout of the LED sources and improved light concentration. A lighting model of the room environment is designed and simulated, indicating a close relationship between the layout of the light sources and the distribution of illuminance.
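Lighting-model simulations of this kind commonly assume a Lambertian LED emission pattern; the sketch below computes the illuminance distribution on a receiving plane for an assumed 5 m x 5 m x 3 m room with assumed lamp parameters, none of which are taken from the paper.

```python
# Sketch: illuminance on a desk-height plane from four ceiling LED lamps,
# assuming a generalized Lambertian source. All parameters are illustrative.
import numpy as np

I0 = 20.0            # assumed center luminous intensity of one lamp (cd)
half_angle = 60.0    # assumed LED semi-angle at half power (degrees)
m = -np.log(2) / np.log(np.cos(np.radians(half_angle)))  # Lambertian order

leds = np.array([[1.25, 1.25, 3.0], [1.25, 3.75, 3.0],
                 [3.75, 1.25, 3.0], [3.75, 3.75, 3.0]])  # assumed lamp layout
xs, ys = np.meshgrid(np.linspace(0, 5, 50), np.linspace(0, 5, 50))
plane = np.stack([xs, ys, np.full_like(xs, 0.85)], axis=-1)  # receiving plane

E = np.zeros(xs.shape)
for led in leds:
    d = np.linalg.norm(plane - led, axis=-1)
    cos_phi = (led[2] - plane[..., 2]) / d      # emission angle = incidence angle here
    E += I0 * cos_phi**m * cos_phi / d**2       # Lambertian lamp, horizontal plane

print(f"illuminance across the plane: min {E.min():.2f}, max {E.max():.2f}")
```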
In recent years, real-time video streaming has grown in popularity. The growing popularity of the Internet of Things (IoT) and other wireless heterogeneous networks mandates that network resources be carefully apportioned among versatile users in order to achieve the best Quality of Experience (QoE) and performance objectives. Most researchers have focused on Forward Error Correction (FEC) techniques when attempting to strike a balance between QoE and performance. However, as network capacity increases, performance degrades, impacting the live visual experience. Recently, Deep Learning (DL) algorithms have been successfully integrated with FEC to stream videos across multiple heterogeneous networks, but these algorithms must be adapted to improve the experience without worsening packet loss and delay. To address this challenge, this paper proposes a novel intelligent algorithm that streams video in multi-homed heterogeneous networks based on network-centric characteristics. The proposed framework contains modules such as the Intelligent Content Extraction Module (ICEM), the Channel Status Monitor (CSM), and Adaptive FEC (AFEC). The framework adopts a Cognitive Learning-based Scheduling (CLS) module, which works on the deep Reinforced Gated Recurrent Network (RGRN) principle and embeds it alongside the FEC to achieve better performance. The complete framework was developed using the Objective Modular Network Testbed in C++ (OMNeT++), the INET framework, and Python 3.10, with Keras as the front end and TensorFlow 2.10 as the back end. In extensive experiments, the proposed model outperforms the other existing intelligent models in improving QoE, minimizing the End-to-End Delay (EED), and maintaining the highest accuracy (98%) and a lower Root Mean Square Error (RMSE) value of 0.001.
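As a rough illustration of the CLS idea, not the authors' RGRN design, the sketch below shows a gated recurrent network that maps a window of per-path channel measurements to a discrete FEC redundancy level; the feature set, window length, and output levels are assumptions.

```python
# Sketch: a GRU observes a window of channel measurements from the CSM and
# picks an FEC redundancy level for the AFEC stage. Untrained here; in a real
# framework it would be trained from QoE/delay feedback.
import numpy as np
from tensorflow.keras import layers, models

WINDOW, FEATURES = 20, 4   # assumed: 20 samples of (throughput, loss, RTT, jitter)
FEC_LEVELS = 5             # assumed discrete redundancy ratios, e.g. 0%..40%

cls = models.Sequential([
    layers.Input(shape=(WINDOW, FEATURES)),
    layers.GRU(64),
    layers.Dense(FEC_LEVELS, activation="softmax"),  # choose a redundancy level
])
cls.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

channel_window = np.random.rand(1, WINDOW, FEATURES).astype("float32")  # hypothetical CSM output
fec_level = int(np.argmax(cls.predict(channel_window, verbose=0)))
print("chosen FEC redundancy level:", fec_level)
```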
A novel bandwidth prediction and control scheme is proposed for video transmission over an ad hoc network. The scheme is based on cross-layer, feedback, and Bayesian network techniques. The factors affecting video quality are formulated and deduced, and the relevant factors are obtained through a cross-layer mechanism or a feedback method. According to these relevant factors, the variable set and the Bayesian network topology are determined, and a Bayesian network prediction model is constructed. The prediction results serve as the available bandwidth of the mobile ad hoc network (MANET). According to this bandwidth, the video encoder is controlled to dynamically adjust and encode the appropriate bit rates of a real-time video stream. An integrated simulation of a video streaming communication system is implemented to validate the proposed solution. In contrast to the conventional transfer scheme, the experimental results indicate that the proposed scheme makes the best use of the network bandwidth, with considerable improvements in packet loss and in the visual quality of real-time video.
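The sketch below illustrates the control loop in miniature: a posterior over bandwidth states is computed with Bayes' rule from a cross-layer observation, and the expected bandwidth drives the encoder's target bit rate. The probability tables, states, and 0.85 headroom factor are illustrative assumptions, not the paper's Bayesian network.

```python
# Sketch: infer a bandwidth state from a cross-layer observation with Bayes'
# rule over hand-made conditional probability tables, then derive an encoder
# target bit rate from the estimate.
import numpy as np

bw_states = np.array([200, 500, 900])        # kbps classes: low / medium / high
prior = np.array([0.3, 0.5, 0.2])            # P(bandwidth state), assumed

# P(observed MAC-layer contention level | bandwidth state), one row per state
p_contention = np.array([[0.7, 0.2, 0.1],    # low bandwidth -> contention likely high
                         [0.3, 0.5, 0.2],
                         [0.1, 0.3, 0.6]])

observed_contention = 0                      # feedback reports high contention
posterior = prior * p_contention[:, observed_contention]
posterior /= posterior.sum()

predicted_bw = float(posterior @ bw_states)  # expected available bandwidth
target_bitrate = 0.85 * predicted_bw         # leave headroom before configuring the encoder
print(f"predicted {predicted_bw:.0f} kbps -> encode at {target_bitrate:.0f} kbps")
```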
The scalable extension of H.264/AVC, known as scalable video coding or SVC, is currently the main focus of the Joint Video Team's work. In its present working draft, the higher-level syntax of SVC follows the design principles of H.264/AVC. Self-contained network abstraction layer units (NAL units) form natural entities for packetization. The SVC specification is by no means finalized yet, but the work towards an optimized RTP payload format has nevertheless already started. RFC 3984, the RTP payload specification for H.264/AVC, has been taken as a starting point, but it quickly became clear that the scalable features of SVC require adaptation in at least the areas of capability/operation point signaling and documentation of the extended NAL unit header. This paper first gives an overview of the history of scalable video coding, and then reviews the video coding layer (VCL) and NAL of the latest SVC draft specification. Finally, it discusses different aspects of the draft SVC RTP payload format, including the design criteria, use cases, signaling, and payload structure.
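For readers unfamiliar with the NAL unit header discussed above, the sketch below parses the one-byte H.264/AVC header and, for SVC NAL unit types 14 and 20, the three-byte extension carrying the dependency/quality/temporal identifiers; the extension bit layout is reproduced from memory of the draft specification and should be treated as illustrative rather than normative.

```python
# Sketch: parse the one-byte AVC NAL unit header plus the SVC header extension
# (types 14 and 20). Extension field positions are illustrative, not normative.
def parse_nal_header(data: bytes) -> dict:
    b0 = data[0]
    hdr = {
        "forbidden_zero_bit": b0 >> 7,
        "nal_ref_idc": (b0 >> 5) & 0x3,
        "nal_unit_type": b0 & 0x1F,
    }
    if hdr["nal_unit_type"] in (14, 20):          # prefix NAL / scalable slice
        b1, b2, b3 = data[1], data[2], data[3]
        hdr.update({
            "priority_id": b1 & 0x3F,
            "dependency_id": (b2 >> 4) & 0x7,     # spatial/CGS layer (DID)
            "quality_id": b2 & 0xF,               # quality layer (QID)
            "temporal_id": (b3 >> 5) & 0x7,       # temporal layer (TID)
        })
    return hdr

print(parse_nal_header(bytes([0x6E, 0x80, 0x20, 0x40])))  # hypothetical SVC NAL unit
```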
A wireless multi-hop video transmission experiment system is designed and implemented for vehicular ad hoc networks (VANETs), and the corresponding transmission control protocol and routing protocol are proposed. The system integrates an embedded Linux system with an ARM kernel and consists of an S3C6410 main control module, a wireless local area network (WLAN) card, an LCD screen, and so on. In the scenario of wireless multi-hop video transmission, both H.264 and JPEG are used, and their performance, such as compression rate, delay, and frame loss rate, is analyzed in theory and compared in experiments. The system is tested in real indoor and outdoor environments. The results show that the multi-hop video transmission experiment system is applicable to VANETs and multiple scenes, and that the proposed transmission control protocol and routing protocol can achieve real-time transmission and meet multi-hop requirements.
Automatic video mosaicking is a challenging task in computer vision. Current research considers either panoramic or mapping tasks on short videos. In this paper, an automatic mosaicking algorithm based on adaptive key-frame selection is proposed for both mapping and panoramic tasks on videos of any length. Speeded-up robust features (SURF) and the grid motion statistics (GMS) algorithm are used for feature extraction and matching between consecutive frames, which are then used to compute the transformation. In order to reduce the influence of accumulated error during image stitching, an evaluation metric is put forward for the transformation matrix. In addition, a self-growth method is employed to stitch the global image for long videos. The algorithm is evaluated using aerial-view and panoramic videos on a graphics processing unit (GPU) device, and it satisfies the real-time requirement. The experimental results demonstrate that the proposed algorithm achieves better performance than the state of the art.
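A sketch of the matching stage is given below. The paper pairs SURF with GMS; since SURF requires a non-free opencv-contrib build, this sketch substitutes ORB for detection and uses cv2.xfeatures2d.matchGMS (opencv-contrib) for match filtering before estimating the inter-frame homography. File names, thresholds, and the inlier-ratio check are illustrative stand-ins for the paper's evaluation metric.

```python
# Sketch: GMS-filtered matching between consecutive frames and a simple
# transformation-quality check. Requires opencv-contrib-python.
import cv2
import numpy as np

prev = cv2.imread("frame_000.jpg")   # placeholder consecutive frames
curr = cv2.imread("frame_001.jpg")

orb = cv2.ORB_create(10000, fastThreshold=0)     # GMS works best with dense keypoints
kp1, des1 = orb.detectAndCompute(prev, None)
kp2, des2 = orb.detectAndCompute(curr, None)
raw = cv2.BFMatcher(cv2.NORM_HAMMING).match(des1, des2)

good = cv2.xfeatures2d.matchGMS(prev.shape[:2][::-1], curr.shape[:2][::-1],
                                kp1, kp2, raw, withRotation=True, withScale=True)

if len(good) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    # In the spirit of the paper's evaluation metric: distrust the transform
    # when too few matches survive RANSAC.
    inlier_ratio = float(inliers.sum()) / len(good) if inliers is not None else 0.0
    print("inter-frame homography inlier ratio:", inlier_ratio)
```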
360 video streaming services over the network are becoming popular. In particular, it is easy to experience 360 video through the already-ubiquitous smartphone. However, due to the nature of 360 video, it is difficult to provide a stable streaming service in a general network environment, because the amount of data to send is larger than that of conventional video. Moreover, the user's actual viewing area is very small compared with the amount sent. In this paper, we propose a system that can provide high-quality 360 video streaming services to users more efficiently in the cloud. In particular, we propose a streaming system focused on the use of a head-mounted display (HMD).
Automated live video stream analytics has been extensively researched in recent times. Most traditional methods for video anomaly detection are supervised and use a single classifier to identify an anomaly in a frame. We propose a three-stage, ensemble-based, unsupervised deep reinforcement algorithm with an underlying Long Short-Term Memory (LSTM) based Recurrent Neural Network (RNN). In the first stage, an ensemble of LSTM-RNNs is deployed to generate the anomaly score. The second stage uses the least-squares method for optimal anomaly score generation. The third stage adopts reward-based reinforcement learning to update the model. The proposed Hybrid Ensemble RR Model was tested on the standard pedestrian datasets UCSD Ped1 and UCSD Ped2, which contain 70 and 28 videos respectively, with a total of 18,560 frames. Since a real-time stream has strict memory constraints and storage issues, a simple computing machine does not suffice for performing analytics on stream data; hence the proposed approach is designed to work on a GPU (Graphics Processing Unit) and TPU (Tensor Processing Unit) supported framework. As shown in the experimental results section, recorded observations on frame-level EER (Equal Error Rate) and AUC (Area Under Curve) showed a 9% reduction in EER on UCSD Ped1, a 13% reduction in EER on UCSD Ped2, and a 4% improvement in accuracy on both datasets.
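A hedged sketch of the first two stages follows: an ensemble of small LSTM predictors forecasts the next frame-feature vector, their prediction errors serve as anomaly scores, and the scores are fused with least-squares weights. The feature extraction step, layer sizes, and the calibration target are placeholders, not the paper's exact model.

```python
# Sketch: LSTM-ensemble anomaly scoring with least-squares score fusion.
import numpy as np
from tensorflow.keras import layers, models

SEQ, FEAT, ENSEMBLE = 8, 64, 3   # assumed: 8-frame windows of 64-d frame features

def make_predictor():
    m = models.Sequential([
        layers.Input(shape=(SEQ, FEAT)),
        layers.LSTM(32),
        layers.Dense(FEAT),          # predict the next frame's feature vector
    ])
    m.compile(optimizer="adam", loss="mse")
    return m

ensemble = [make_predictor() for _ in range(ENSEMBLE)]

windows = np.random.rand(100, SEQ, FEAT).astype("float32")   # hypothetical features
targets = np.random.rand(100, FEAT).astype("float32")
for m in ensemble:
    m.fit(windows, targets, epochs=1, verbose=0)

# Per-model anomaly score = prediction error; fuse the scores with least-squares
# weights against a stand-in calibration target.
scores = np.stack([np.mean((m.predict(windows, verbose=0) - targets) ** 2, axis=1)
                   for m in ensemble], axis=1)               # shape (100, ENSEMBLE)
reference = scores.mean(axis=1)                              # placeholder calibration target
weights, *_ = np.linalg.lstsq(scores, reference, rcond=None)
fused_score = scores @ weights
print("fused anomaly scores (first 5):", fused_score[:5])
```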
An augmented virtual environment (AVE) is concerned with the fusion of real-time video with 3D models or scenes so as to augment the virtual environment. In this paper, a new approach to establishing an AVE with a wide field of view is proposed, including real-time video projection, multiple video texture fusion, and 3D visualization of moving objects. A new diagonally weighted algorithm is proposed to smooth the apparent gaps within the overlapping area between two adjacent videos. A visualization method for the location and trajectory of a moving virtual object is proposed to display the moving object and its trajectory in the 3D virtual environment. The experimental results show that the proposed set of algorithms can fuse multiple real-time videos with 3D models efficiently; the experiment runs a 3D scene containing two million triangles and six real-time videos at around 55 frames per second on a laptop with 1 GB of graphics card memory. In addition, a realistic AVE with a wide field of view was created on the Digital Earth Science Platform by fusing three videos with a complex indoor virtual scene, visualizing a moving object, and drawing its trajectory in real time.
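The sketch below shows the general feathering idea behind weighted blending across the overlap of two adjacent video frames; the paper's diagonally weighted scheme is more elaborate, and the frame sizes and overlap width here are arbitrary.

```python
# Sketch: linear-ramp weighted blending across a 120-pixel overlap so the seam
# between two adjacent frames fades out instead of showing an apparent gap.
import numpy as np

H, W, OVERLAP = 480, 640, 120
left = np.random.randint(0, 255, (H, W, 3), dtype=np.uint8)   # stand-in frames
right = np.random.randint(0, 255, (H, W, 3), dtype=np.uint8)

alpha = np.linspace(1.0, 0.0, OVERLAP)[None, :, None]         # left weight across the overlap
blended_strip = (alpha * left[:, -OVERLAP:].astype(np.float32) +
                 (1 - alpha) * right[:, :OVERLAP].astype(np.float32)).astype(np.uint8)

panorama = np.hstack([left[:, :-OVERLAP], blended_strip, right[:, OVERLAP:]])
print(panorama.shape)   # (480, 1160, 3): two 640-wide frames sharing a 120-px overlap
```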
Real-time variable bit rate (VBR) video is expected to account for a significant portion of multimedia applications. However, the high burstiness of VBR traffic raises plentiful challenges for VBR video service provision. To support multi-user real-time VBR video transmission with high bandwidth utilization and satisfactory quality of service (QoS), this article proposes a practical dynamic bandwidth management scheme. The scheme forecasts the future media rate of VBR video by employing a time-domain adaptive linear predictor, and uses the media delivery index (MDI) both as a QoS measurement and as a complementary management reference. In addition, to support multi-user applications, a strategy of classified adjustment priorities is put forward. Finally, a test-bed based on this management scheme is established. The experimental results demonstrate that the proposed scheme is efficient, increasing bandwidth utilization by 20%-60% compared with a fixed service rate while guaranteeing QoS.
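A minimal sketch of the forecasting step is given below: an LMS-style adaptive linear predictor estimates the next interval's media rate from recent observations so bandwidth can be re-provisioned ahead of bursts. The filter order, step size, and synthetic rate trace are assumptions, not the paper's configuration.

```python
# Sketch: time-domain adaptive linear prediction (LMS) of the VBR media rate.
import numpy as np

ORDER, MU = 4, 0.01                 # predictor taps and LMS step size (assumed)
w = np.zeros(ORDER)                 # adaptive filter weights

# Synthetic media-rate trace (Mbps) standing in for measured VBR traffic.
rates = 2.0 + 0.5 * np.sin(np.arange(200) / 7.0) + 0.1 * np.random.randn(200)

predictions = []
for t in range(ORDER, len(rates)):
    x = rates[t - ORDER:t][::-1]    # most recent samples first
    y_hat = w @ x                   # predicted media rate for interval t
    e = rates[t] - y_hat            # prediction error
    w += MU * e * x                 # LMS weight update
    predictions.append(y_hat)

print(f"last predicted vs. actual rate (Mbps): {predictions[-1]:.2f} / {rates[-1]:.2f}")
```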