As a branch of computer science, information visualization aims to help users understand and analyze complex data through graphical interfaces and interactive technologies. It primarily encompasses various visual structures, such as time-series structures, spatial relationship structures, statistical distribution structures, and geographic map structures, each with unique functions and application scenarios. To better explain the cognitive process of visualization, researchers have proposed various cognitive models based on interaction mechanisms, visual perception steps, and novice use of visualization. These models help explain user cognition in information visualization, enhancing the effectiveness of data analysis and decision-making.
Background: With the rapid development of Web3D technologies, online Web3D visualization, particularly of complex models or scenes, has been in great demand. Owing to the major conflict between Web3D system load and resource consumption in the processing of these huge models, this paper reviews lightweighting methods for huge 3D models in online Web3D visualization. Methods: Observing the geometric redundancy introduced by man-made operations during modeling, several categories of lightweighting-related work that aim at reducing the amount of data and resource consumption are elaborated for Web3D visualization. Results: By comparing perspectives, the characteristics of each method are summarized. Among the reviewed methods, geometric redundancy removal, which achieves the lightweighting goal by detecting and removing repeated components, is an appropriate method for current online Web3D visualization. Meanwhile, learning algorithms, still maturing at present, are our expected future research topic. Conclusions: An efficient lightweighting method for online Web3D visualization should consider various aspects, such as the characteristics of the original data, combination or extension of existing methods, scheduling strategy, cache management, and the rendering mechanism. Innovative methods, particularly learning algorithms, are also worth exploring.
Artificial Intelligence (AI) and Computer Vision (CV) advancements have led to many useful methodologies in recent years, particularly to help visually challenged people. Object detection involves a variety of challenges, for example, handling multiple-class images and images that get augmented when captured by a camera; the test images include all these variants as well. Such detection models alert visually challenged users about their surroundings when they want to walk independently. This study compares four CNN-based pre-trained models predominantly used in image recognition applications: Residual Network (ResNet-50), Inception v3, Dense Convolutional Network (DenseNet-121), and SqueezeNet. Based on the analysis performed on these test images, the study infers that Inception v3 outperformed the other pre-trained models in terms of accuracy and speed. To further improve the performance of the Inception v3 model, the thermal exchange optimization (TEO) algorithm is applied to tune the hyperparameters (number of epochs, batch size, and learning rate), which constitutes the novelty of the work. Better accuracy was achieved owing to the inclusion of an auxiliary classifier as a regularizer, the hyperparameter optimizer, and the factorization approach. Additionally, Inception v3 can handle images of different sizes. This makes Inception v3 the optimum model for assisting visually challenged people in real-world communication when integrated with Internet of Things (IoT)-based devices.
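The abstract does not give TEO's update rules, so as an illustration only, here is a generic random-search tuner over the same three hyperparameters (epochs, batch size, learning rate). The objective function, search ranges, and `tune_hyperparameters` helper are hypothetical stand-ins, not the paper's method:

```python
import math
import random

def tune_hyperparameters(objective, n_trials=30, seed=0):
    """Randomly sample (epochs, batch_size, learning_rate) and keep the
    best-scoring configuration -- a generic stand-in for the TEO tuning
    step described in the abstract (search ranges are assumptions)."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {
            "epochs": rng.choice([10, 20, 30, 50]),
            "batch_size": rng.choice([16, 32, 64, 128]),
            "learning_rate": 10 ** rng.uniform(-4, -1),
        }
        score = objective(cfg)  # in practice: validation accuracy
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy objective peaking near lr=1e-3 and batch_size=32 (hypothetical).
def toy_accuracy(cfg):
    return -abs(math.log10(cfg["learning_rate"]) + 3) - abs(cfg["batch_size"] - 32) / 64

best, _ = tune_hyperparameters(toy_accuracy)
print(best)
```

In a real run, `objective` would train the Inception v3 model with the sampled configuration and return held-out accuracy.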
The rapid evolution of wireless communication technologies has underscored the critical role of antennas in ensuring seamless connectivity. Antenna defects, ranging from manufacturing imperfections to environmental wear, pose significant challenges to the reliability and performance of communication systems. This review navigates the landscape of antenna defect detection, emphasizing the need for a nuanced understanding of the various defect types and the associated challenges in visual detection, and serves as a valuable resource for researchers, engineers, and practitioners engaged in the design and maintenance of communication systems. The insights presented here pave the way for enhanced reliability in antenna systems through targeted defect detection measures. A comprehensive literature analysis of computer vision algorithms employed in end-of-line visual inspection of antenna parts is presented. The PRISMA principles were followed throughout the review, whose goals are to summarize recent research, identify relevant computer vision techniques, and evaluate how effective these techniques are at discovering defects during inspections. It covers articles from scholarly journals as well as conference papers published up until June 2023; relevant search phrases were used, and papers were chosen based on whether they met certain inclusion and exclusion criteria. Several computer vision approaches, such as feature extraction and defect classification, are broken down and analyzed, and their applicability and performance are discussed. The review highlights the significance of utilizing a wide variety of datasets and measurement criteria. The findings of this study add to the existing body of knowledge and point researchers toward promising new areas of investigation, such as real-time inspection systems and multispectral imaging. On the whole, this review offers a complete study of computer vision approaches for quality control of antenna parts, providing helpful insights and drawing attention to areas that require additional exploration.
Data acquisition and modeling are two important, difficult, and costly aspects of a Cybercity project. 2D GIS is mature and can manage a large amount of spatial data, so 3D GIS should make the best of the data and technology of 2D GIS. Construction of a useful synthetic environment requires the integration of multiple types of information, such as DEMs, texture images, and 3D representations of objects such as buildings. In this paper, a method for 3D city landscape data modeling and visualization based on integrated databases is presented. Since the volume of raster data is very large, special strategies (for example, the pyramid gridded method) must be adopted to manage raster data efficiently. Three different methods of data acquisition, a proper data structure, and a simple modeling method are presented as well. Finally, a pilot project for Shanghai Cybercity is illustrated.
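The pyramid gridded idea mentioned above can be sketched as repeated 2x2 block averaging; this minimal Python version assumes square rasters with power-of-two dimensions and is only an illustration of the concept, not the paper's implementation:

```python
def build_pyramid(raster, min_size=1):
    """Build an image pyramid by repeated 2x2 block averaging.
    `raster` is a square 2D list whose side is a power of two.
    Coarse levels can serve distant views; fine levels are paged in
    on demand, keeping resident raster data small."""
    levels = [raster]
    while len(levels[-1]) > min_size:
        src = levels[-1]
        n = len(src) // 2
        dst = [[(src[2 * i][2 * j] + src[2 * i][2 * j + 1] +
                 src[2 * i + 1][2 * j] + src[2 * i + 1][2 * j + 1]) / 4.0
                for j in range(n)] for i in range(n)]
        levels.append(dst)
    return levels

pyramid = build_pyramid([[1, 2, 3, 4],
                         [5, 6, 7, 8],
                         [9, 10, 11, 12],
                         [13, 14, 15, 16]])
print(len(pyramid), pyramid[-1])  # 3 levels; the coarsest cell is the global mean, 8.5
```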
The mechanisms of seismically induced liquefaction of granular soils under high confining stresses are still not fully understood; evaluation of these mechanisms is generally based on extrapolation of behavior observed at shallow depths. Three centrifuge model tests were conducted at RPI's experimental facility to investigate the effects of confining stresses on the dynamic response of a deep horizontal deposit of saturated sand. Liquefaction was observed at high confining stresses in each of the tests. A system identification procedure was used to estimate the associated shear strain and stress time histories, which revealed a response marked by shear strength degradation and dilative patterns. The recorded accelerations and pore pressures were employed to generate visual animations of the models. These visualizations revealed a liquefaction front traveling downward and leading to large shear strains and isolation of the upper soil layers.
A digital mine is the inevitable outcome of information processing and is also a complicated systems engineering effort. First, for the 3D visualization applications of the digital mine, a ground and underground integrative visualization framework model was proposed based on a mine entity database, which effectively resolved the visualization problem and improved professional analytical capability. Second, to address the irregularity, non-uniformity, and dynamics of mine entities, a mixed modeling method based on entity characteristics was put forward, realizing the 3D representation of mine entities. Finally, a 3D visualization project for a copper mine was studied experimentally. Satisfactory results were acquired, validating the rationality of the visualization model and the feasibility of the 3D modeling.
Traditional vehicle detection algorithms use traverse-search-based vehicle candidate generation and hand-crafted-feature-based classifier training for vehicle candidate verification. These types of methods generally have high processing times and low vehicle detection performance. To address this issue, a vehicle detection algorithm based on visual saliency and a deep sparse convolution hierarchical model is proposed. A visual saliency calculation is first used to generate a small vehicle candidate area. The vehicle candidate sub-images are then fed into a sparse deep convolution hierarchical model with an SVM-based classifier to perform the final detection. The experimental results demonstrate that the proposed method achieves a 94.81% correct rate and a 0.78% false detection rate on existing datasets and on real road pictures captured by our group, outperforming existing state-of-the-art algorithms. More importantly, the deep sparse convolution network generates highly discriminative multi-scale features, which have broad application prospects for target recognition in the field of intelligent vehicles.
The structure of porous media and the fluid distribution in rocks can significantly affect transport characteristics during microscale tracer flow. To clarify the effect of micro-heterogeneity on aqueous tracer transport, this paper presents microscopic experiments at the pore level and proposes an improved mathematical model for tracer transport. The visualization results show faster tracer movement into movable water than into bound water, and quicker occupancy of flowing pores than of storage pores, caused by the difference in tracer velocity. The proposed mathematical model includes the effects of bound water and flowing porosity by applying an interstitial flow velocity expression. The new model also distinguishes flowing and storage pores, accounting for the different tracer transport mechanisms (dispersion, diffusion, and adsorption) in the different types of pores. The resulting analytical solution matches tracer production data better than the standard model: the residual sum of squares (RSS) from the new model is 0.0005, 100 times smaller than the RSS from the standard model. The sensitivity analysis indicates that the dispersion coefficient and flowing porosity show a negative correlation with the tracer breakthrough time and the increasing slope, whereas the superficial velocity and bound water saturation show a positive correlation.
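The RSS metric used above to compare the two models is straightforward to reproduce. The concentration values below are hypothetical placeholders, only the metric itself comes from the abstract:

```python
def residual_sum_of_squares(observed, predicted):
    """RSS between measured tracer concentrations and model output;
    the abstract ranks the new and standard models with this metric."""
    return sum((o - p) ** 2 for o, p in zip(observed, predicted))

# Hypothetical normalized tracer-production curves (not the paper's data).
observed       = [0.00, 0.10, 0.45, 0.80, 0.95]
new_model      = [0.00, 0.11, 0.44, 0.81, 0.94]
standard_model = [0.00, 0.20, 0.30, 0.90, 0.80]

rss_new = residual_sum_of_squares(observed, new_model)
rss_std = residual_sum_of_squares(observed, standard_model)
print(rss_new < rss_std)  # the better-fitting model has the smaller RSS
```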
A novel approach to computing the high-frequency radar cross-section (RCS) of complex targets is described in this paper. From the three views or the sectional views of the target, the target is geometrically modeled by non-uniform rational B-spline (NURBS) parametric surfaces using CNFEOV, software developed by the authors that constructs NURBS representations of complex targets from engineering orthographic views. The RCS is obtained through PO, PTD, MEC, and IBC techniques. When calculating the RCS of the target, it is necessary to obtain the unit normal vector to the surface illuminated by the radar and the value Z, the distance from a point on the surface to the radar. In this approach, the unit normal vector to the surface can be obtained either from the Phong rendering model, in which the color components (RGB) of every pixel in the image equal the coordinate components of the normal, or from the NURBS expressions. The value Z can be obtained by a software or hardware Z-buffer. The effects of image size on the computed RCS are discussed and a suitable choice is recommended. As examples, the RCS of a perfectly conducting sphere, cylinder, and dihedral, as well as a coated cylinder, are computed. The accuracy of the method is verified by comparing the numerical results with those obtained by other methods.
To solve the unbalanced-data problems of learning models for semantic concepts, an optimized modeling method based on the posterior probability support vector machine (PPSVM) is presented, and a neighbor-based posterior probability estimator for visual concepts is provided. The proposed method has been applied in a high-level visual semantic concept classification system, and the experimental results show that it yields enhanced performance over the baseline SVM models, as well as improved robustness in high-level visual semantic concept classification.
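One common form a neighbor-based posterior estimator can take is the class frequency among the k nearest labelled samples. The sketch below illustrates that idea with hypothetical feature vectors and labels; the paper's estimator may differ in detail:

```python
def knn_posterior(x, samples, k=5):
    """Estimate P(class | x) as the class frequency among the k nearest
    labelled samples -- one simple neighbor-based posterior estimator.
    `samples` is a list of (feature_vector, label) pairs."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    nearest = sorted(samples, key=lambda s: dist(x, s[0]))[:k]
    counts = {}
    for _, label in nearest:
        counts[label] = counts.get(label, 0) + 1
    return {label: c / k for label, c in counts.items()}

# Hypothetical 2-D features for two visual concepts.
samples = [([0.0, 0.0], "sky"), ([0.1, 0.1], "sky"), ([0.2, 0.0], "sky"),
           ([1.0, 1.0], "building"), ([0.9, 1.1], "building")]
posterior = knn_posterior([0.05, 0.05], samples, k=3)
print(posterior)  # all 3 nearest neighbours are "sky"
```

Such posterior estimates can then weight or calibrate an SVM's decision on imbalanced classes, which is the spirit of the PPSVM approach.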
This study addresses the challenges of big data visualization by using data reduction based on feature selection, with the goals of reducing the volume of big data and minimizing model training time (Tt) while maintaining data quality. We compare an embedded method, Select From Model (SFM) using the Random Forest Importance (RFI) algorithm, with a filter method, Select Percentile (SP) based on the chi-square (Chi2) statistic, for selecting the most important features, which are then fed into a classification step using the logistic regression (LR) algorithm and the k-nearest neighbor (KNN) algorithm. The classification accuracy (AC) of LR is also compared to that of KNN, in Python on eight data sets, to determine which method produces the best results when feature selection is applied. The study concludes that feature selection has a significant impact on the analysis and visualization of the data after removing repetitive data and data that do not affect the goal. After several comparisons, the study proposes SFMLR: SFM based on the RFI algorithm for feature selection, with the LR algorithm for classification. The proposal proved its efficacy in comparisons with recent literature.
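The "select percentile" idea can be illustrated without any library by ranking precomputed feature scores and keeping the top fraction. The scores below are hypothetical; the study itself used scikit-learn-style SFM/SP implementations:

```python
def select_percentile(feature_scores, percentile=50):
    """Keep the indices of the highest-scoring features -- the core of the
    filter method sketched in the abstract. `feature_scores` maps a feature
    index to an importance score (e.g. a chi-square statistic or a
    random-forest importance)."""
    n_keep = max(1, round(len(feature_scores) * percentile / 100))
    ranked = sorted(feature_scores, key=feature_scores.get, reverse=True)
    return sorted(ranked[:n_keep])

# Hypothetical importance scores for six features.
scores = {0: 0.02, 1: 0.35, 2: 0.01, 3: 0.40, 4: 0.22, 5: 0.00}
kept = select_percentile(scores, percentile=50)
print(kept)  # the three most informative features survive
```

The surviving feature columns would then be passed to the LR or KNN classifier, exactly as in the pipeline described above.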
With social development, we are stepping into an information technology world, in which our lives are becoming more diversified and richer because of e-business. E-business not only provides us convenience but also produces large amounts of business data; how to better store, manage, and use these business data has become a major field of study in e-business. With the rapid growth of data volume, relational database systems can no longer meet current requirements. This paper focuses on a visualized analysis model for Hadoop business data, analyzing the business data in terms of the visualization platform, the database, and the analysis model. Based on this analysis, offline data analysis and data visualization for the Hive database can be greatly improved, providing references and suggestions for visualized analysis models of Hadoop business data.
A Robust Adaptive Video Encoder (RAVE) based on a human visual model is proposed. The encoder combines the best features of Fine Granularity Scalable (FGS) coding, frame-dropping coding, video redundancy coding, and the human visual model. According to the packet loss and available bandwidth of the network, the encoder adjusts the output bit rate by jointly adapting the quantization step size as instructed by the human visual model, rate shaping, and periodically inserting key frames. The proposed encoder is implemented on top of an MPEG-4 encoder and is compared with a conventional FGS algorithm. It is shown that RAVE is a very efficient robust video encoder that provides improved visual quality for the receiver while consuming equal or less network resources. The results are confirmed by subjective tests and simulations.
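The adaptation loop described above, coarsening the quantizer under loss or bandwidth pressure and refining it when there is headroom, can be sketched as a toy rule. The thresholds and step factors here are illustrative assumptions, not values from the paper:

```python
def adapt_quantization(q_step, packet_loss, bandwidth_kbps,
                       target_kbps, loss_threshold=0.05):
    """Toy rate-control rule in the spirit of the abstract: coarsen the
    quantizer when the network degrades, refine it when there is headroom.
    All thresholds and factors are illustrative, not from the paper."""
    if packet_loss > loss_threshold or bandwidth_kbps < target_kbps:
        q_step = min(q_step * 1.25, 51)  # coarser quantization, lower bit rate
    elif bandwidth_kbps > 1.2 * target_kbps:
        q_step = max(q_step * 0.9, 1)    # finer quantization, better quality
    return q_step

q = 10.0
q = adapt_quantization(q, packet_loss=0.10, bandwidth_kbps=800, target_kbps=600)
print(q)  # loss above threshold, so the quantizer is coarsened
```

In the real encoder this decision would additionally be weighted by the human visual model, so perceptually important regions keep finer quantization.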
To simplify three-dimensional building group models, this paper proposes a clustering generalization method based on visual cognitive theory. The method uses road elements to roughly divide scenes and then uses spatial cognitive elements such as direction, area, and height, together with their topological constraints, to classify them precisely, so that the result conforms to urban morphological characteristics. A Delaunay triangulation network and a boundary tracking synthesis algorithm are used to merge and generalize the models, which are stored hierarchically. The proposed algorithm was verified experimentally with a typical urban complex model. The experimental results show that the method is at least 20% more efficient than the previous one, and the efficiency gain grows with the size of the test data. The classification results conform to human cognitive habits, and the generalization levels of different models can be relatively unified by adaptively controlling each threshold in the clustering generalization process.
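The cognition-based classification by direction, area, and height can be illustrated by a greedy similarity grouping. The tolerance values and the omission of topological constraints are simplifying assumptions for illustration only:

```python
def cluster_buildings(buildings, dir_tol=15.0, area_tol=0.3, height_tol=0.3):
    """Greedily group buildings whose orientation, footprint area and height
    are mutually similar -- a simplified stand-in for the paper's
    cognition-based classification (topological constraints omitted).
    Each building is a dict with 'dir' (degrees), 'area', 'height'."""
    def similar(a, b):
        return (abs(a["dir"] - b["dir"]) <= dir_tol and
                abs(a["area"] - b["area"]) <= area_tol * max(a["area"], b["area"]) and
                abs(a["height"] - b["height"]) <= height_tol * max(a["height"], b["height"]))
    clusters = []
    for b in buildings:
        for c in clusters:
            if all(similar(b, m) for m in c):  # join a cluster it fully matches
                c.append(b)
                break
        else:
            clusters.append([b])               # otherwise start a new cluster
    return clusters

# Hypothetical building attributes: two similar row houses and one tower.
blocks = [{"dir": 0, "area": 100, "height": 20},
          {"dir": 5, "area": 110, "height": 22},
          {"dir": 90, "area": 500, "height": 60}]
clusters = cluster_buildings(blocks)
print(len(clusters))  # the two row houses merge; the tower stays alone
```

Each resulting cluster would then be merged into a single generalized model, e.g. via the Delaunay-based boundary synthesis the abstract mentions.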
Funding (object detection study): Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R191), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code (22UQU4310373DSR61). This study is supported via funding from Prince Sattam bin Abdulaziz University, project number (PSAU/2023/R/1444).
Funding (liquefaction study): This research was supported by the National Science Foundation, Grant No. CMS-984754 (Dr. C. Astill, program manager), and the US Army Engineer Research and Development Center.
Funding (digital mine study): Project (41061043) supported by the National Natural Science Foundation of China.
Funding (vehicle detection study): Supported by the National Natural Science Foundation of China (Grant Nos. U1564201, 61573171, 61403172, 51305167), the China Postdoctoral Science Foundation (Grant Nos. 2015T80511, 2014M561592), the Jiangsu Provincial Natural Science Foundation of China (Grant No. BK20140555), the Six Talent Peaks Project of Jiangsu Province, China (Grant Nos. 2015-JXQC-012, 2014-DZXX-040), the Jiangsu Postdoctoral Science Foundation, China (Grant No. 1402097C), and the Jiangsu University Scientific Research Foundation for Senior Professionals, China (Grant No. 14JDG028).
Funding (tracer transport study): Funded by National Science and Technology Major Projects (2017ZX05009004, 2016ZX05058003), Beijing Natural Science Foundation (2173061), and the State Energy Center for Shale Oil Research and Development (G5800-16-ZS-KFNY005).
文摘Structure of porous media and fluid distribution in rocks can significantly affect the transport characteristics during the process of microscale tracer flow.To clarify the effect of micro heterogeneity on aqueous tracer transport,this paper demonstrates microscopic experiments at pore level and proposes an improved mathematical model for tracer transport.The visualization results show a faster tracer movement into movable water than it into bound water,and quicker occupancy in flowing pores than in storage pores caused by the difference of tracer velocity.Moreover,the proposed mathematical model includes the effects of bound water and flowing porosity by applying interstitial flow velocity expression.The new model also distinguishes flowing and storage pores,accounting for different tracer transport mechanisms(dispersion,diffusion and adsorption)in different types of pores.The resulting analytical solution better matches with tracer production data than the standard model.The residual sum of squares(RSS)from the new model is 0.0005,which is 100 times smaller than the RSS from the standard model.The sensitivity analysis indicates that the dispersion coefficient and flowing porosity shows a negative correlation with the tracer breakthrough time and the increasing slope,whereas the superficial velocity and bound water saturation show a positive correlation.
Abstract: A novel approach to computing the high-frequency radar cross-section (RCS) of complex targets is described in this paper. From the three orthographic views or the sectional views of the target, the target is geometrically modeled with non-uniform rational B-spline (NURBS) parametric surfaces using our in-house software CNFEOV, which constructs the NURBS representation of a complex target from engineering orthographic views. The RCS is obtained through PO, PTD, MEC, and IBC techniques. Calculating the RCS of the target requires the unit normal vector of each surface illuminated by the radar and the value Z, the distance from a point on the surface to the radar. In this approach, the unit normal vector can be obtained either from the Phong rendering model, in which the color components (RGB) of every pixel in the image equal the coordinate components of the normal, or from the NURBS expressions. The value Z can be obtained by a software or hardware Z-buffer. The effect of image size on the computed RCS is discussed and a suitable choice is recommended. As examples, the RCS of a perfectly conducting sphere, cylinder, and dihedral, as well as a coated cylinder, are computed. The accuracy of the method is verified by comparing the numerical results with those obtained by other methods.
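The trick of storing normals in color channels can be sketched as a simple encode/decode pair. This is a minimal illustration of the mapping idea, assuming the common convention of scaling components from [-1, 1] into [0, 255]; the paper's exact pixel format is not specified.

```python
def normal_to_rgb(n):
    """Encode a unit normal (components in [-1, 1]) as 8-bit RGB, so each
    color channel of a rendered pixel stores one coordinate of the normal."""
    return tuple(round((c + 1.0) * 127.5) for c in n)

def rgb_to_normal(rgb):
    """Recover the (quantized) normal from the stored color channels."""
    return tuple(v / 127.5 - 1.0 for v in rgb)

# A normal tilted in the y-z plane survives the round trip up to
# 8-bit quantization error.
n = (0.0, 0.6, 0.8)
recovered = rgb_to_normal(normal_to_rgb(n))
```

Reading normals back from a rendered image this way lets the RCS integrand be evaluated per pixel without re-querying the NURBS surfaces.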
Funding: Sponsored by the Beijing Municipal Natural Science Foundation (4082027).
Abstract: To address the unbalanced-data problem in learning models for semantic concepts, an optimized modeling method based on the posterior probability support vector machine (PPSVM) is presented, together with a neighbor-based posterior probability estimator for visual concepts. The proposed method has been applied in a high-level visual semantic concept classification system, and the experimental results show enhanced performance over the baseline SVM models, as well as improved robustness for high-level visual semantic concept classification.
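The core of a neighbor-based posterior estimator can be sketched as follows; this is a minimal k-NN fraction estimate of P(class = 1 | x), an illustrative simplification rather than the paper's exact estimator.

```python
def knn_posterior(x, samples, labels, k=3):
    """Neighbor-based posterior estimate: approximate P(class=1 | x) by the
    fraction of the k nearest training samples carrying label 1.
    (A sketch of the idea; the paper's estimator may weight neighbors.)"""
    order = sorted(range(len(samples)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(x, samples[i])))
    nearest = order[:k]
    return sum(labels[i] for i in nearest) / k

# Two well-separated toy clusters: label 0 near the origin, label 1 near (5, 5).
samples = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
labels  = [0, 0, 0, 1, 1, 1]
```

Such posterior estimates, rather than hard labels, are what a PPSVM consumes, which is how the method softens the impact of class imbalance.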
Funding: Supported by the National Key Technology R&D Program (2006BAK04B04), the National Natural Science Foundation of China (50604003), and the Specialized Research Fund for the Doctoral Program of Higher Education (2006008005).
Abstract: This study addresses the challenges of big-data visualization through data reduction based on feature selection, aiming to reduce the volume of big data and minimize model training time (Tt) while maintaining data quality. We compare the embedded "Select from model" (SFM) method, driven by the random forest importance (RFI) algorithm, against the filter "Select percentile" (SP) method based on the chi-square (Chi2) test for selecting the most important features. The selected features are then fed into classification with the logistic regression (LR) and k-nearest neighbor (KNN) algorithms, and the classification accuracy (AC) of LR is compared with that of KNN in Python on eight data sets to determine which combination of feature selection and classifier performs best. The study concludes that feature selection has a significant impact on the analysis and visualization of the data once redundant features and features irrelevant to the goal are removed. After several comparisons, the study proposes SFMLR: SFM with the RFI algorithm for feature selection, combined with the LR algorithm for classification. The proposal proved its efficacy when its results were compared with the recent literature.
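The filter-style selection step can be illustrated without any ML library. The class-separation score below is a deliberately simple stand-in for the Chi2 or RFI scoring the paper actually uses; only the select-top-scoring-columns mechanic matches the described pipeline.

```python
def class_separation_scores(X, y):
    """Score each feature column by |mean over class 1 - mean over class 0|.
    A toy filter criterion standing in for the paper's Chi2/RFI scoring."""
    scores = []
    for j in range(len(X[0])):
        g0 = [row[j] for row, label in zip(X, y) if label == 0]
        g1 = [row[j] for row, label in zip(X, y) if label == 1]
        scores.append(abs(sum(g1) / len(g1) - sum(g0) / len(g0)))
    return scores

def select_top(X, scores, n):
    """Keep the n highest-scoring feature columns (filter-style reduction),
    returning the reduced matrix and the kept column indices."""
    keep = sorted(range(len(scores)), key=lambda j: -scores[j])[:n]
    keep.sort()
    return [[row[j] for j in keep] for row in X], keep

# Feature 0 tracks the label perfectly; feature 1 is pure noise.
X = [[0, 5], [0, 9], [1, 5], [1, 9]]
y = [0, 0, 1, 1]
reduced, kept = select_top(X, class_separation_scores(X, y), 1)
```

The reduced matrix would then be handed to the LR or KNN classifier, exactly as the SFM/SP outputs are in the study.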
Abstract: With social development, we are stepping into an information technology world, in which our lives grow ever more diversified and rich because of e-business. E-business provides us not only convenience but also large amounts of business data, and how to better store, manage, and use these data has become a major research topic in e-business. With the rapid growth of data volume, relational database systems can no longer meet current requirements. This paper focuses on a visual analysis model for Hadoop business data, analyzing the data in terms of the visualization platform, the database, and the analysis model. Based on this analysis, offline data analysis and data visualization for the Hive database can be greatly improved, providing references and suggestions for visual analysis models of Hadoop business data.
Funding: Supported by the Innovation Fund of China (00C26224210641).
Abstract: A Robust Adaptive Video Encoder (RAVE) based on a human visual model is proposed. The encoder combines the best features of fine granularity scalable (FGS) coding, frame-dropping coding, video redundancy coding, and the human visual model. According to the packet loss and available bandwidth of the network, the encoder adjusts the output bit rate by jointly adapting the quantization step size guided by the human visual model, rate shaping, and periodically inserting key frames. The proposed encoder is implemented on top of an MPEG-4 encoder and compared with a conventional FGS algorithm. RAVE is shown to be a very efficient robust video encoder that provides improved visual quality for the receiver while consuming equal or fewer network resources. The results are confirmed by subjective tests and simulation tests.
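The feedback loop that maps network state to quantization can be sketched as a tiny controller. The thresholds, step sizes, and the MPEG-style step range below are illustrative assumptions in the spirit of RAVE, not values from the paper.

```python
def adjust_qstep(qstep, loss_rate, bw_ratio, q_min=2, q_max=31):
    """Toy rate controller: coarsen the quantization step when packet loss
    rises or available bandwidth shrinks (bw_ratio < 1 means less bandwidth
    than the current bit rate needs), and refine it when there is headroom.
    All thresholds here are illustrative, not the paper's."""
    if loss_rate > 0.05 or bw_ratio < 0.9:       # congestion: cut bit rate
        qstep = min(q_max, qstep + 2)
    elif loss_rate < 0.01 and bw_ratio > 1.1:    # headroom: improve quality
        qstep = max(q_min, qstep - 1)
    return qstep                                  # otherwise hold steady
```

In RAVE this adaptation would additionally be weighted by the human visual model, so quality is sacrificed first where the eye is least sensitive.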
Abstract: To simplify three-dimensional building group models, this paper proposes a clustering generalization method based on visual cognitive theory. The method uses road elements to roughly divide scenes, and then uses spatial cognitive attributes such as direction, area, and height, together with their topological constraints, for fine classification, so that the result conforms to urban morphological characteristics. A Delaunay triangulation network and a boundary-tracking synthesis algorithm are used to merge and generalize the models, which are stored hierarchically. The proposed algorithm is verified experimentally with a typical urban complex model. The experimental results show that the efficiency of the method is at least 20% higher than that of the previous method, and the advantage grows with the size of the test data. The classification results conform to human cognitive habits, and the generalization levels of different models can be relatively unified by adaptively controlling each threshold in the clustering generalization process.
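The fine classification by spatial attributes can be sketched as a greedy grouping on relative similarity. This is a simplified illustration using only area and height; the paper's direction attribute, topological constraints, and Delaunay-based merging are omitted, and the tolerance values are assumptions.

```python
def cluster_buildings(buildings, area_tol=0.2, height_tol=0.2):
    """Greedily group buildings whose area and height are within a relative
    tolerance of a cluster's first member. A simplified stand-in for the
    paper's attribute-based fine classification step."""
    clusters = []
    for b in buildings:
        for cl in clusters:
            ref = cl[0]
            if (abs(b["area"] - ref["area"]) / ref["area"] <= area_tol and
                    abs(b["height"] - ref["height"]) / ref["height"]
                    <= height_tol):
                cl.append(b)
                break
        else:                     # no similar cluster found: start a new one
            clusters.append([b])
    return clusters

# Two similar low-rise blocks and one large tower (illustrative values).
bldgs = [{"area": 100, "height": 10},
         {"area": 105, "height": 11},
         {"area": 300, "height": 40}]
groups = cluster_buildings(bldgs)
```

Each resulting cluster would then be merged into one generalized footprint via the Delaunay triangulation and boundary-tracking step, with the tolerances adaptively tuned to unify generalization levels across models.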