Funding: Supported by the National Natural Science Foundation of China (61872024) and the National Key R&D Program of China under Grant 2018YFB2100603.
Abstract: Background: In this study, we propose a novel 3D scene graph prediction approach for scene understanding from point clouds. Methods: The approach automatically organizes the entities of a scene into a graph, where objects are nodes and their relationships are modeled as edges. More specifically, we employ DGCNN to capture the features of objects and their relationships in the scene. A Graph Attention Network (GAT) is introduced to exploit latent features obtained from the initial estimation and further refine the object arrangement in the graph structure. A loss function modified from cross-entropy with a variable weight is proposed to solve the multi-category problem in object and predicate prediction. Results: Experiments reveal that the proposed approach performs favorably against state-of-the-art methods in predicate classification and relationship prediction, and achieves comparable performance in object classification. Conclusions: The 3D scene graph prediction approach can form an abstract description of the scene space from point clouds.
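The "variable weight" modification of cross-entropy is not fully specified in the abstract; below is a minimal sketch of one plausible reading in PyTorch, assuming inverse-frequency class weights (the weighting scheme and normalization are illustrative assumptions, not the authors' exact formulation):

```python
import torch
import torch.nn.functional as F

def variable_weight_ce(logits, targets, class_counts, eps=1.0):
    # Inverse-frequency weights: rare object/predicate classes get larger weights.
    weights = 1.0 / (class_counts.float() + eps)
    weights = weights / weights.sum() * len(class_counts)  # normalize to mean 1
    return F.cross_entropy(logits, targets, weight=weights)

# Toy usage: logits for 8 samples over 5 classes.
logits = torch.randn(8, 5)
targets = torch.randint(0, 5, (8,))
counts = torch.tensor([100, 10, 5, 50, 2])  # per-class training frequencies
loss = variable_weight_ce(logits, targets, counts)
print(loss.item())
```

Class-balanced weighting of this kind is a common remedy when rare predicate classes would otherwise be drowned out by frequent ones.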
Funding: Supported by the National Natural Science Foundation of China (Fund Numbers 62272478 and 62102451), the National Defense Science and Technology Independent Research Project (Intelligent Information Hiding Technology and Its Applications in a Certain Field), and the Science and Technology Innovation Team Innovative Research Project "Research on Key Technologies for Intelligent Information Hiding" (Fund Number ZZKY20222102).
Abstract: As neural radiance fields continue to advance in 3D content representation, the copyright issues surrounding 3D models oriented towards implicit representation become increasingly pressing. In response to this challenge, this paper treats the embedding and extraction of neural radiance field watermarks as inverse problems of image transformations and proposes a scheme for protecting neural radiance field copyrights using invertible neural network watermarking. Leveraging 2D image watermarking technology for 3D scene protection, the scheme embeds watermarks within the training images of neural radiance fields through the forward process of invertible neural networks and extracts them from images rendered by neural radiance fields through the reverse process, thereby ensuring copyright protection for both the neural radiance fields and the associated 3D scenes. However, challenges such as information loss during rendering and deliberate tampering necessitate an image quality enhancement module to increase the scheme's robustness. This module restores distorted images through neural network processing before watermark extraction. Additionally, embedding watermarks in each training image enables watermark information to be extracted from multiple viewpoints. Our proposed watermarking method achieves PSNR (Peak Signal-to-Noise Ratio) values exceeding 37 dB for images containing watermarks and 22 dB for recovered watermarked images, as evaluated on the Lego, Hotdog, and Chair datasets. These results demonstrate the efficacy of our scheme in enhancing copyright protection.
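For reference, the PSNR metric reported above can be computed as follows; this is the standard definition, not code from the paper:

```python
import numpy as np

def psnr(img_a, img_b, max_val=255.0):
    # Peak Signal-to-Noise Ratio (dB) between two images of equal shape.
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.random.randint(0, 256, (64, 64, 3))
b = np.clip(a + np.random.randint(-3, 4, a.shape), 0, 255)
print(psnr(a, b))  # small perturbation -> high PSNR
```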
Abstract: In this paper, we present a framework for the generation and control of an Internet-based 3-dimensional game virtual environment that allows a character to navigate through the environment. Our framework includes 3-dimensional terrain mesh data processing, a map editor, scene processing, collision processing, and walkthrough control. We also define an environment-specific semantic information editor, which can be applied using specific locations obtained from the real world. Users can insert text information related to the character's real position in the real world while navigating the game virtual environment.
Funding: Under the auspices of the Science Data Sharing Pilot Project of the Ministry of Science and Technology of China (No. 2003DEA2C010) and the Natural Science Fund of Henan University on Virtual City Construction Method (No. 04YBRW026).
Abstract: Virtual Reality provides a new approach for geographical research. In this paper, a framework for the Virtual Huanghe (Yellow) River System was first presented from a technological point of view, comprising five main modules: data sources, a 3D simulation terrain database, a 3D simulation model database, 3D simulation implementation, and the application system. Then the key technologies for constructing the Virtual Huanghe River System were discussed in detail: 1) OpenGL, the 3D graphics development instrument, was employed to realize dynamic real-time navigation. 2) MO and OpenGL technologies were used to make the 3D scene and the 2D electronic map mutually responsive, exploiting the advantages of both: the macroscopic view, completeness, and conciseness of the 2D electronic map combined with the locality, realism, and visualization of the 3D scene. At the same time, the disadvantages of abstraction and ambiguity in the 2D electronic map and of losing direction during virtual navigation in the 3D scene were overcome.
Abstract: Simulating a satellite launch mission on a general platform that can be used in a full-digital mode as well as in a semi-physical way is an important means of verifying mission design performance and technical feasibility, and it involves complex system simulation methods such as multi-disciplinary coupling and multi-language modeling, as well as interactive simulation and virtual simulation technologies. This paper introduces the design of a digital simulation platform for satellite launch mission verification. The platform has the advantages of high generality and extensibility and is easy to build. The Functional Mockup Interface (FMI) Standard is adopted to achieve integration of multi-source models. A WebGL-based 3D visual simulation tool is adopted to implement the virtual display system, which can display the rocket launch process and rocket-satellite separation in high-fidelity 3D virtual scenes. A configuration tool was developed to map the 3D objects in the visual scene to physical simulation variables for complex rocket flight control mechanisms, which greatly improves the platform's generality and extensibility. Finally, the real-time performance was tested, and the YL-1 launch mission was used to demonstrate the functions of the platform. The platform will be used to construct a digital twin system for satellite launch missions in the future.
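For readers unfamiliar with FMI, the sketch below shows how an FMU exported from any FMI-compliant tool can be simulated with the open-source FMPy library; the model file and variable names are hypothetical placeholders, and the paper's platform may use a different FMI runtime:

```python
# Requires: pip install fmpy
from fmpy import simulate_fmu

# 'rocket_dynamics.fmu' is a hypothetical model name; any FMI 2.0/3.0
# FMU exported from another modeling tool can be loaded the same way.
result = simulate_fmu(
    "rocket_dynamics.fmu",
    start_time=0.0,
    stop_time=600.0,
    output=["altitude", "velocity"],  # variable names assumed for illustration
)
# The result is a structured array keyed by variable name.
print(result["time"][-1], result["altitude"][-1])
```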
Funding: Supported by the National Natural Science Foundation of China (Project No. 61120106007), a Research Grant of the Beijing Higher Institution Engineering Research Center, and the Tsinghua University Initiative Scientific Research Program.
Abstract: 3D scene modeling has long been a fundamental problem in computer graphics and computer vision. With the popularity of consumer-level RGB-D cameras, there is a growing interest in digitizing real-world indoor 3D scenes. However, modeling indoor 3D scenes remains a challenging problem because of the complex structure of interior objects and the poor quality of RGB-D data acquired by consumer-level sensors. Various methods have been proposed to tackle these challenges. In this survey, we provide an overview of recent advances in indoor scene modeling techniques, as well as public datasets and code libraries that can facilitate experiments and evaluation.
Funding: Supported by the International (Regional) Cooperation and Exchange Program of the National Natural Science Foundation of China under Grant No. 62161146002 and the Shenzhen Collaborative Innovation Program under Grant No. CJGJZD2021048092601003.
Abstract: We present SinGRAV, an attempt to learn a generative radiance volume from multi-view observations of a single natural scene, in stark contrast to existing category-level 3D generative models that learn from images of many object-centric scenes. Inspired by SinGAN, we also learn the internal distribution of the input scene, which necessitates our key designs with respect to the scene representation and network architecture. Unlike popular multi-layer perceptron (MLP)-based architectures, we employ convolutional generators and discriminators, which inherently possess a spatial locality bias, to operate over voxelized volumes and learn the internal distribution over a plethora of overlapping regions. On the other hand, localizing the adversarial generators and discriminators over confined areas with limited receptive fields easily leads to highly implausible geometric structures in the spatial arrangement. Our remedy is to use a spatial inductive bias and joint discrimination on geometric clues in the form of 2D depth maps. This strategy is effective in improving the spatial arrangement while incurring negligible additional computational cost. Experimental results demonstrate the ability of SinGRAV to generate plausible and diverse variations from a single scene, the merits of SinGRAV over state-of-the-art generative neural scene models, and the versatility of SinGRAV through its use in a variety of applications. Code and data will be released to facilitate further research.
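A minimal sketch of a convolutional voxel discriminator with a confined receptive field, in the spirit described above; the channel widths, depth, and input layout are illustrative assumptions rather than SinGRAV's actual configuration:

```python
import torch
import torch.nn as nn

class VoxelPatchDiscriminator(nn.Module):
    # Small 3x3x3 kernels keep the receptive field local, so each output
    # score judges a confined region of the voxelized volume.
    def __init__(self, in_ch=4, width=32, n_layers=3):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(n_layers):
            layers += [nn.Conv3d(ch, width, 3, stride=1, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
            ch = width
        layers += [nn.Conv3d(ch, 1, 3, padding=1)]  # per-location real/fake score
        self.net = nn.Sequential(*layers)

    def forward(self, vol):          # vol: (B, C, D, H, W)
        return self.net(vol)         # (B, 1, D, H, W) map of local judgments

d = VoxelPatchDiscriminator()
scores = d(torch.randn(1, 4, 32, 32, 32))
print(scores.shape)
```

Because each output score depends only on a small local neighborhood, the discriminator judges many overlapping regions rather than the whole scene, which is what makes learning an internal distribution from a single input possible.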
Funding: This work received funding from the Autonomous Region of Sardinia under project XDATA. Eva Almansa, Armando Sanchez, Giorgio Vassena, and Enrico Gobbetti received funding from the European Union's H2020 research and innovation programme under grant 813170 (EVOCATION).
Abstract: We introduce a novel end-to-end deep-learning solution for rapidly estimating a dense spherical depth map of an indoor environment. Our input is a single equirectangular image registered with a sparse depth map, as provided by a variety of common capture setups. Depth is inferred by an efficient and lightweight single-branch network, which employs a dynamic gating system to jointly process dense visual data and sparse geometric data. We exploit the characteristics of typical man-made environments to efficiently compress multi-resolution features and find short- and long-range relations among scene parts. Furthermore, we introduce a new augmentation strategy to make the model robust to different types of sparsity, including those generated by various structured-light sensors and LiDAR setups. The experimental results demonstrate that our method provides interactive performance and outperforms state-of-the-art solutions in computational efficiency, adaptivity to variable depth sparsity patterns, and prediction accuracy for challenging indoor data, even when trained solely on synthetic data without any fine-tuning.
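A minimal sketch of the dynamic-gating idea, assuming a per-pixel sigmoid gate that blends dense visual features with sparse geometric features; the layer sizes and gating form are assumptions for illustration, not the paper's architecture:

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 1),
            nn.Sigmoid(),  # per-pixel, per-channel mixing weight in [0, 1]
        )

    def forward(self, visual, geometric):
        # Learn where to trust sparse geometry over dense appearance.
        g = self.gate(torch.cat([visual, geometric], dim=1))
        return g * geometric + (1 - g) * visual

fuse = GatedFusion(16)
out = fuse(torch.randn(1, 16, 64, 128), torch.randn(1, 16, 64, 128))
print(out.shape)
```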
Funding: This work was supported by the National Natural Science Foundation of China (Grant Nos. 61902210 and 61521002).
Abstract: Reconstructing dynamic scenes with commodity depth cameras has many applications in computer graphics, computer vision, and robotics. However, due to the presence of noise and erroneous observations from data-capturing devices and the inherently ill-posed nature of non-rigid registration with insufficient information, traditional approaches often produce low-quality geometry with holes, bumps, and misalignments. We propose a novel 3D dynamic reconstruction system, named HDR-Net-Fusion, which learns to simultaneously reconstruct and refine the geometry on the fly with a sparse embedded deformation graph of surfels, using a hierarchical deep reinforcement (HDR) network. The latter comprises two parts: a global HDR-Net, which rapidly detects local regions with large geometric errors, and a local HDR-Net, which serves as a local patch refinement operator to promptly complete and enhance such regions. Training the global HDR-Net is formulated as a novel reinforcement learning problem to implicitly learn the region selection strategy with the goal of improving the overall reconstruction quality. The applicability and efficiency of our approach are demonstrated using a large-scale dynamic reconstruction dataset. Our method can reconstruct geometry with higher quality than traditional methods.
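The reinforcement-learning formulation can be illustrated with a toy reward: the policy is rewarded by how much the local refinement reduces geometric error in the region it selected. The error metric and scalar form below are assumptions, not the paper's exact reward:

```python
import numpy as np

def region_selection_reward(errors_before, errors_after, region_mask):
    # Reward the region-selection policy by the drop in mean geometric error
    # inside the chosen region; positive when refinement helped.
    before = errors_before[region_mask].mean()
    after = errors_after[region_mask].mean()
    return float(before - after)

err0 = np.abs(np.random.randn(128, 128))  # per-point geometric error map
err1 = err0 * 0.5                         # pretend refinement halved the error
mask = err0 > 1.0                         # selected high-error region
print(region_selection_reward(err0, err1, mask))
```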
Funding: Supported by the National Natural Science Foundation of China (U22B2034) and the Fundamental Research Funds for the Central Universities (226-2022-00064).
Abstract: With the support of edge computing, the synergy and collaboration among the central cloud, edge cloud, and terminal devices form an integrated computing ecosystem known as the cloud-edge-client architecture. This integration unlocks the value of data and computational power, presenting significant opportunities for large-scale 3D scene modeling and XR presentation. In this paper, we explore the perspectives and highlight new challenges in point-cloud-based 3D scene modeling and XR presentation within the cloud-edge-client integrated architecture. We also propose a novel cloud-edge-client integrated technology framework and a demonstration of a municipal governance application to address these challenges.
Funding: Supported by the National Natural Science Foundation of China (No. 61976023).
Abstract: In this paper, we propose a Structure-Aware Fusion Network (SAFNet) for 3D scene understanding. As 2D images present more detailed information while 3D point clouds convey more geometric information, fusing these two complementary kinds of data can improve the discriminative ability of the model. Fusion is a very challenging task since 2D and 3D data are essentially different and exhibit different formats. Existing methods first extract 2D multi-view image features, then aggregate them into sparse 3D point clouds, and achieve superior performance. However, they ignore the structural relations between pixels and point clouds and directly fuse the two modalities of data without adaptation. To address this, we propose a structural deep metric learning method on pixels and points to explore these relations and further utilize them to adaptively map the images and point clouds into a common canonical space for prediction. Extensive experiments on the widely used ScanNetV2 and S3DIS datasets verify the performance of the proposed SAFNet.
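A minimal sketch of a metric-learning objective that pulls matched pixel and point embeddings together in a shared space while pushing mismatched pairs apart; the margin, normalization, and negative-pairing strategy are assumptions, not SAFNet's exact formulation:

```python
import torch
import torch.nn.functional as F

def pixel_point_metric_loss(pix_emb, pt_emb, margin=0.5):
    # pix_emb: (N, D) pixel features; pt_emb: (N, D) matched point features.
    pix = F.normalize(pix_emb, dim=1)
    pt = F.normalize(pt_emb, dim=1)
    pos = (pix - pt).pow(2).sum(1)                   # matched pairs: pull together
    neg = (pix - pt.roll(1, dims=0)).pow(2).sum(1)   # shifted pairs: push apart
    return (pos + F.relu(margin - neg)).mean()

loss = pixel_point_metric_loss(torch.randn(8, 32), torch.randn(8, 32))
print(loss.item())
```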
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. U2034202, 41871289, and 42171397) and the Sichuan Science and Technology Program (Grant No. 2020JDTD0003).
Abstract: As an important technology of digital construction, real 3D models can improve the immersion and realism of virtual reality (VR) scenes. The large amount of data in real 3D scenes requires more effective rendering methods, but current rendering optimization methods have defects and cannot render real 3D scenes in virtual reality. In this study, the location of the viewing frustum is predicted by a Kalman filter, and eye-tracking equipment is used to recognize the region of interest (ROI) in the scene. Finally, the real 3D model of interest in the predicted frustum is rendered first. The experimental results show that the proposed method can predict the frustum location approximately 200 ms in advance with a prediction accuracy of approximately 87%; scene rendering efficiency is improved by 8.3%, and motion sickness is reduced by approximately 54.5%. This work helps promote the use of real 3D models and ROI recognition methods in virtual reality. In future work, we will further improve the prediction accuracy of viewing frustums in virtual reality and the application of eye tracking in virtual geographic scenes.
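A constant-velocity Kalman filter of the kind the abstract describes can be sketched as follows; the state layout, noise magnitudes, and mapping of one prediction step to the 200 ms horizon are illustrative assumptions, not the paper's tuned parameters:

```python
import numpy as np

dt = 0.2                                             # prediction horizon (s)
F_mat = np.block([[np.eye(3), dt * np.eye(3)],
                  [np.zeros((3, 3)), np.eye(3)]])    # [pos, vel] transition
H = np.hstack([np.eye(3), np.zeros((3, 3))])         # we observe position only
Q = 1e-3 * np.eye(6)                                 # process noise
R = 1e-2 * np.eye(3)                                 # measurement noise

x = np.zeros(6)                                      # position (m) and velocity (m/s)
P = np.eye(6)

def kalman_step(x, P, z):
    # Predict one step ahead, then correct with the measured head position z (3,).
    x_pred = F_mat @ x
    P_pred = F_mat @ P @ F_mat.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(6) - K @ H) @ P_pred
    return x_new, P_new

x, P = kalman_step(x, P, np.array([0.1, 1.6, 0.0]))
predicted_position = (F_mat @ x)[:3]                 # ~200 ms ahead
print(predicted_position)
```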
Abstract: In this paper, a novel data- and rule-driven system for 3D scene description and segmentation in an unknown environment is presented. This system generates hierarchies of features that correspond to structural elements, such as the boundaries and shape classes of individual objects, as well as relationships between objects. It is implemented as a high-level component added to an existing low-level binocular vision system [1]. Based on a pair of matched stereo images produced by that system, 3D segmentation is first performed to group object boundary data into several edge-sets, each of which is believed to belong to a particular object. Then, gross features of each object are extracted and stored in an object record. The final structural description of the scene is accomplished with the information in the object record, a set of rules, and a rule implementor. The system is designed to handle partially occluded objects of different shapes and sizes in the 2D image. Experimental results have shown its success in computing both object-level and structural-level descriptions of common man-made objects.
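The "object record" the abstract describes can be pictured as a simple per-object data structure; the field names below are hypothetical, chosen only to mirror the features the abstract mentions:

```python
from dataclasses import dataclass, field

@dataclass
class ObjectRecord:
    # Hypothetical shape of the per-object record: grouped boundary data,
    # gross features, and relations used by the rule implementor.
    edge_set: list = field(default_factory=list)         # 3D boundary segments
    shape_class: str = "unknown"                         # e.g., "box", "cylinder"
    gross_features: dict = field(default_factory=dict)   # size, centroid, ...
    relations: list = field(default_factory=list)        # relations to other objects

record = ObjectRecord(shape_class="box", gross_features={"height_mm": 120})
print(record)
```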