Funding: Project (50808025) supported by the National Natural Science Foundation of China; Project (20090162110057) supported by the Doctoral Fund of the Ministry of Education of China.
Abstract: A method for detecting traffic dangers based on a visual attention model with sparse sampling was proposed. A hemispherical sparse sampling model was used to reduce the amount of computation, thereby increasing detection speed. A Bayesian probability model and a Gaussian kernel function were applied to compute the saliency of traffic videos. A multiscale saliency method was adopted, with the final saliency taken as the average over all scales, which markedly increased the detection rate. Detection results for several typical traffic dangers show that the proposed method achieves higher detection rates and speed, meeting the requirements of real-time traffic danger detection.
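As a rough illustration of the multiscale averaging step, the Python sketch below scores each sparse sample by its Gaussian-kernel dissimilarity to the other samples at several kernel scales and averages the per-scale saliencies. The kernel widths, feature vectors, and outlier setup are illustrative assumptions, not the paper's hemispherical sampling model or Bayesian formulation.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian kernel similarity between two feature vectors."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def saliency_at_scale(features, center_idx, sigma):
    """Saliency of one sample: low similarity to the others means high saliency."""
    center = features[center_idx]
    sims = [gaussian_kernel(center, f, sigma)
            for i, f in enumerate(features) if i != center_idx]
    return 1.0 - np.mean(sims)

def multiscale_saliency(features, sigmas=(0.5, 1.0, 2.0)):
    """Final saliency is the average of the saliency values over all scales."""
    n = len(features)
    per_scale = np.array([[saliency_at_scale(features, i, s) for i in range(n)]
                          for s in sigmas])
    return per_scale.mean(axis=0)

# Toy usage: 10 sparse samples with 8-dimensional feature vectors.
rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 8))
feats[3] += 5.0                      # an outlier sample should come out salient
print(multiscale_saliency(feats).round(3))
```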
Abstract: Based on the good localization characteristics of the wavelet transform in both the time and frequency domains, a de-noising method based on the wavelet transform is presented that can extract visual evoked potentials (EPs) from the EEG background noise in a single training sample, facilitating the study of changes between single-sample responses. This information is probably related to differences in brain function, appearance, and pathology. The method can also remove signal artifacts that do not coincide with the EP in time or frequency, a result that the traditional Fourier filter can hardly attain. It differs from other wavelet de-noising methods in the criteria employed for choosing wavelet coefficients. Its greatest virtue is that it attends to the differences among single training samples and exploits the high time-frequency resolution of the wavelet transform to suppress interference to the greatest possible extent within the time window where the EP appears. Experimental results show that the method is not restricted by the signal-to-noise ratio between the evoked potential and the electroencephalograph (EEG); it can recognize instantaneous events even at low signal-to-noise ratios and can more easily identify the samples that evoke an evident response. Therefore, a clearer average evoked response can be obtained by de-noising the average of the samples that evoke evident responses than by de-noising the average of all original signals. In addition, this averaging strategy dramatically reduces the number of recorded samples needed, avoiding the effect of behavioral changes during recording. Because the methodology attends to the differences among single training samples and extracts visual evoked potentials from a single training sample, applying it to a brain-computer interface system based on evoked responses could greatly improve system speed and accuracy.
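A minimal sketch of single-trial wavelet de-noising using PyWavelets is shown below. It applies the common universal soft threshold; the paper's own coefficient-selection criterion, which keeps coefficients in the time-frequency region where the EP appears, would replace the thresholding rule, and the wavelet choice, decomposition level, and synthetic trial are assumptions for illustration.

```python
import numpy as np
import pywt  # PyWavelets

def denoise_single_trial(signal, wavelet="db4", level=4):
    """Soft-threshold wavelet de-noising of one EEG trial.

    Uses the universal threshold for brevity; the paper's criterion
    (retaining coefficients inside the time-frequency window where the
    EP appears) would replace the thresholding rule below.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise level estimated from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Toy usage: a synthetic evoked response buried in background noise.
t = np.linspace(0, 1, 512)
ep = np.exp(-((t - 0.3) ** 2) / 0.002)          # transient "evoked potential"
trial = ep + 0.5 * np.random.default_rng(1).normal(size=t.size)
clean = denoise_single_trial(trial)
```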
Funding: Supported by the National Natural Science Foundation of China (61472289), the National Key Research and Development Project (2016YFC0106305), and the Key Technology R&D Program of Hubei Province (2014BAA153).
Abstract: Target tracking is one of the most important issues in computer vision and has been applied in many fields of science, engineering, and industry. Because of occlusion during tracking, typical approaches with a single classifier learn much of the occluding background, which degrades tracking performance and eventually causes the tracking algorithm to fail. This paper presents a new correlative-classifiers approach to address this problem. Our idea is to derive a group of correlative classifiers based on a sample set method. We then propose a strategy to establish the classifiers and to query the classifiers suitable for tracking the next frame. To deal with nonlinearity, a particle filter is adopted and integrated with the sample set method. To choose the target from the candidate particles, we define a similarity measurement between particles and the sample sets. The proposed sample set method includes the following steps. First, we crop a positive sample set around the target and a negative sample set far away from the target. Second, we extract average Haar-like features from these samples and calculate their statistical characteristics, which represent the target model. Third, we define a similarity measurement based on the statistical characteristics of these two sets to judge the similarity between candidate particles and the target model. Finally, we choose the particle with the largest similarity score as the target in the new frame. A number of experiments show the robustness and efficiency of the proposed approach compared with other state-of-the-art trackers.
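The similarity measurement between candidate particles and the two sample sets could look roughly like the sketch below, which models each set by the mean and standard deviation of its features and scores a candidate by a Gaussian log-likelihood ratio. The feature dimensionality, the Gaussian form, and the toy data are assumptions, since the abstract does not fix these details.

```python
import numpy as np

def set_statistics(samples):
    """Mean and standard deviation of the (average Haar-like) features of a sample set."""
    return samples.mean(axis=0), samples.std(axis=0) + 1e-6

def similarity(feature, pos_stats, neg_stats):
    """Score a candidate by how much better it fits the positive set than the negative set."""
    def log_gauss(x, mu, sd):
        return -0.5 * np.sum(((x - mu) / sd) ** 2 + np.log(2 * np.pi * sd ** 2))
    return log_gauss(feature, *pos_stats) - log_gauss(feature, *neg_stats)

def choose_target(particle_features, pos_set, neg_set):
    """Pick the candidate particle with the largest similarity score."""
    pos_stats, neg_stats = set_statistics(pos_set), set_statistics(neg_set)
    scores = [similarity(f, pos_stats, neg_stats) for f in particle_features]
    return int(np.argmax(scores)), scores

# Toy usage with random 16-dimensional features.
rng = np.random.default_rng(2)
pos = rng.normal(0.0, 1.0, size=(30, 16))     # samples cropped around the target
neg = rng.normal(3.0, 1.0, size=(30, 16))     # samples cropped far from the target
particles = rng.normal(0.2, 1.0, size=(50, 16))
best, _ = choose_target(particles, pos, neg)
```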
Abstract: Cyber security has been thrust into the limelight in the modern technological era because of an array of attacks that often bypass untrained intrusion detection systems (IDSs). Therefore, greater attention has been directed toward deciphering better methods for identifying attack types so as to train IDSs more effectively. Key cyber-attack insights exist in big data; however, an efficient approach is required to determine the strong attack types on which to train IDSs. Despite the growing body of IDS research, there is a lack of studies involving big data visualization, which is key. The KDD99 data set has served as a strong benchmark since 1999; therefore, we utilized this data set in our experiment. In this study, we utilized a hash algorithm, a weight table, and a sampling method to deal with the inherent problems of analyzing big data: volume, variety, and velocity. By utilizing a visualization algorithm, we gained insights into the KDD99 data set, clearly identifying "normal" clusters and describing distinct clusters of effective attacks.
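A hedged sketch of the hash-algorithm/weight-table/sampling idea: records are hashed into a weight table that counts how often each pattern occurs, and frequent patterns are down-sampled while rare ones (often the interesting attacks) are kept. The keep-probability rule and the toy KDD99-like tuples are assumptions for illustration, not the paper's exact scheme.

```python
import hashlib
import random
from collections import Counter

def record_key(record):
    """Stable hash of a connection record (a tuple of KDD99-style fields)."""
    return hashlib.md5(",".join(map(str, record)).encode()).hexdigest()

def weighted_sample(records, keep_prob_for_weight):
    """Weight table + sampling: frequent record patterns are down-sampled,
    rare patterns are mostly kept."""
    weights = Counter(record_key(r) for r in records)  # the weight table
    rng = random.Random(0)
    return [r for r in records
            if rng.random() < keep_prob_for_weight(weights[record_key(r)])]

# Toy usage: keep rare patterns with probability 1, common ones with ~10/weight.
data = [("tcp", "http", "SF", 0)] * 900 + [("tcp", "telnet", "REJ", 1)] * 5
sampled = weighted_sample(data, lambda w: min(1.0, 10.0 / w))
```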
Abstract: Estimating the global position of a road vehicle without using GPS is a challenge that many scientists look forward to solving in the near future. Normally, inertial and odometry sensors are used to complement GPS measurements in an attempt to maintain vehicle odometry during GPS outages. Nonetheless, recent experiments have demonstrated that computer vision can also serve as a valuable source of what can be termed visual odometry. For this purpose, vehicle motion can be estimated using a non-linear, photogrammetric approach based on RAndom SAmple Consensus (RANSAC). The results prove that the detection and selection of relevant feature points is a crucial factor in the overall performance of the visual odometry algorithm. The key issues for further improvement are discussed in this letter.
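The RANSAC skeleton behind such motion estimation can be sketched as follows, with a plain 2D translation standing in for the paper's non-linear photogrammetric motion model; the iteration count, inlier tolerance, and synthetic matches are illustrative assumptions.

```python
import numpy as np

def ransac_translation(pts_prev, pts_curr, iters=200, inlier_tol=2.0, rng=None):
    """RANSAC skeleton: hypothesise motion from a minimal sample of matches,
    score by inlier count, refit on the best inlier set.

    The motion model here is a plain 2D translation for brevity; the visual
    odometry in the paper fits a non-linear photogrammetric model instead.
    """
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(pts_prev), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(pts_prev))          # minimal sample: one match
        t = pts_curr[i] - pts_prev[i]
        residuals = np.linalg.norm(pts_prev + t - pts_curr, axis=1)
        inliers = residuals < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers for the final estimate.
    t_final = (pts_curr[best_inliers] - pts_prev[best_inliers]).mean(axis=0)
    return t_final, best_inliers

# Toy usage: 80% of matches follow a (5, -2) shift, 20% are wrong matches.
rng = np.random.default_rng(3)
prev = rng.uniform(0, 100, size=(100, 2))
curr = prev + np.array([5.0, -2.0]) + rng.normal(0, 0.5, size=prev.shape)
curr[:20] = rng.uniform(0, 100, size=(20, 2))   # outlier correspondences
t, inl = ransac_translation(prev, curr)
```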
Funding: Supported by the National Natural Science Foundation of China (Grant No. 61867004) and the Youth Fund of the National Natural Science Foundation of China (Grant No. 41801288).
Abstract: The purpose of software defect prediction is to identify defect-prone code modules so that software quality assurance teams can allocate resources and labor appropriately. In previous software defect prediction studies, transfer learning was effective in addressing inconsistent data distributions across projects. However, target projects often lack sufficient data, which limits the performance of transfer learning models. In addition, uncorrelated features between projects can decrease the prediction accuracy of a transfer learning model. To address these problems, this article proposes a software defect prediction method based on stable learning (SDP-SL) that combines code visualization techniques and residual networks. The method first transforms code files into code images using code visualization techniques and then constructs a defect prediction model on these code images. During model training, target project data are not required as prior knowledge. Following the principles of stable learning, the weights of source project samples are adjusted dynamically to eliminate dependencies between features, thereby capturing the “invariance mechanism” within the data. This approach explores the genuine relationship between code defect features and labels, enhancing defect prediction performance. To evaluate SDP-SL, comparative experiments were conducted on 10 open-source projects in the PROMISE dataset. The experimental results show that, in terms of the F-measure, SDP-SL outperformed other within-project defect prediction methods by 2.11%-44.03%. In cross-project defect prediction, SDP-SL improved prediction performance by 5.89%-25.46% over other cross-project defect prediction methods. Therefore, SDP-SL can effectively enhance both within- and cross-project defect prediction.
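The sample-reweighting step of stable learning can be sketched as below: sample weights are learned so that the weighted covariance between features becomes diagonal, removing linear feature dependencies. This is a simplified stand-in for SDP-SL's actual reweighting, which operates on deep features of code images and also targets non-linear dependencies; the loss, optimizer, and toy data are assumptions.

```python
import numpy as np

def decorrelation_loss(X, theta):
    """Sum of squared off-diagonal entries of the weighted feature covariance.

    Driving this toward zero removes linear dependencies between features,
    the reweighting idea behind stable learning (simplified here).
    """
    w = np.exp(theta); w = w / w.sum()           # normalised sample weights
    mu = w @ X
    Xc = X - mu
    cov = (Xc * w[:, None]).T @ Xc
    off = cov - np.diag(np.diag(cov))
    return np.sum(off ** 2)

def learn_weights(X, steps=300, lr=5.0, eps=1e-5):
    """Plain finite-difference gradient descent on the sample-weight logits."""
    theta = np.zeros(len(X))
    for _ in range(steps):
        grad = np.zeros_like(theta)
        base = decorrelation_loss(X, theta)
        for i in range(len(theta)):              # numerical gradient (toy sizes only)
            t = theta.copy(); t[i] += eps
            grad[i] = (decorrelation_loss(X, t) - base) / eps
        theta -= lr * grad
    w = np.exp(theta)
    return w / w.sum() * len(X)

# Toy usage: two correlated features; the learned weights reduce their correlation.
rng = np.random.default_rng(4)
a = rng.normal(size=60)
X = np.column_stack([a, 0.8 * a + 0.2 * rng.normal(size=60)])
w = learn_weights(X)
```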
Funding: Supported in part by the National Key Research and Development Program of China (2018YFB1700403), the Special Funds for the Construction of an Innovative Province of Hunan (2020GK2028), the National Natural Science Foundation of China (Grant Nos. 61872388, 62072470), and the Natural Science Foundation of Hunan Province (2020JJ4758).
Abstract: Massive sequence view (MSV) is a classic timeline-based dynamic network visualization approach. However, it is vulnerable to visual clutter caused by overlapping edges, which can lead to misreadings of the time-varying trends in network communications. This study presents a new edge sampling algorithm called edge-based multi-class blue noise (E-MCBN) to reduce visual clutter in MSV. Our main idea is inspired by the multi-class blue noise (MCBN) sampling algorithm commonly used to declutter multi-class scatterplots. First, we take a node pair as an edge class, which can be regarded as the analogue of a class in a multi-class scatterplot. Second, we propose two indicators, the class overlap degree and the inter-class conflict degree, to measure the overlap and mutual exclusion, respectively, between edge classes. These indicators lay the foundation for migrating MCBN sampling from multi-class scatterplots to dynamic network sampling. Finally, we propose three strategies to accelerate MCBN sampling and a partitioning strategy to preserve local high-density edges in the MSV. The results show that our approach effectively reduces visual clutter and improves the readability of MSV. Moreover, it overcomes the disadvantages of MCBN sampling, namely its long running time and its failure to preserve local high-density communication areas in MSV. This study is the first to introduce MCBN sampling into dynamic network sampling.
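A dart-throwing approximation of the multi-class blue noise idea, adapted to timeline edges, might look like the sketch below. The fixed same-class and cross-class distance constraints stand in for the paper's class-overlap and inter-class conflict degrees, and the radii and toy edges are assumptions; E-MCBN additionally includes acceleration and partitioning strategies not shown here.

```python
import random

def mcbn_edge_sample(edges, r_same=3.0, r_cross=1.0, trials=3000, seed=0):
    """Dart-throwing approximation of multi-class blue noise (MCBN) sampling.

    `edges` are (time, class_id) pairs, a class being one node pair. A
    candidate is accepted only if it keeps a same-class distance r_same and
    a cross-class distance r_cross to every accepted edge, spreading each
    class evenly on the timeline while limiting overlap between classes.
    """
    rng = random.Random(seed)
    accepted = []
    for _ in range(trials):
        t, c = rng.choice(edges)
        ok = all(abs(t - ta) >= (r_same if c == ca else r_cross)
                 for ta, ca in accepted)
        if ok:
            accepted.append((t, c))
    return accepted

# Toy usage: 500 edges among 5 node pairs (classes) on a [0, 100] timeline.
rng = random.Random(1)
edges = [(rng.uniform(0, 100), rng.randrange(5)) for _ in range(500)]
kept = mcbn_edge_sample(edges)
```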