Light detection and ranging (LiDAR) sensors play a vital role in acquiring 3D point cloud data and extracting valuable information about objects for tasks such as autonomous driving, robotics, and virtual reality (VR). However, the sparse and disordered nature of 3D point clouds poses significant challenges to feature extraction, and overcoming these limitations is critical for 3D point cloud processing. 3D point cloud object detection is a challenging and crucial task in which point cloud processing and feature extraction methods play a central role and have a significant impact on subsequent detection performance. In this overview of notable work on object detection from 3D point clouds, we focus on summarizing the methods employed in 3D point cloud processing. We introduce how point clouds are processed in classical 3D object detection algorithms and the improvements proposed to solve the problems in point cloud processing. Different voxelization methods and point cloud sampling strategies influence the extracted features and thereby the final detection performance.
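As background for the voxelization methods the overview surveys, here is a minimal sketch of how an unordered point set can be pooled into per-voxel centroids (NumPy; the pooling rule and voxel size are illustrative assumptions, not any specific detector's scheme):

```python
import numpy as np

def voxelize(points, voxel_size=0.2):
    """Group points into voxels and return one centroid per occupied voxel."""
    # integer voxel coordinates for each point
    coords = np.floor(points / voxel_size).astype(np.int64)
    # unique occupied voxels and an inverse map from points to voxels
    uniq, inverse = np.unique(coords, axis=0, return_inverse=True)
    counts = np.bincount(inverse)
    centroids = np.zeros((len(uniq), 3))
    for dim in range(3):
        centroids[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return centroids

pts = np.array([[0.05, 0.05, 0.0], [0.15, 0.1, 0.05], [1.0, 1.0, 1.0]])
cent = voxelize(pts, voxel_size=0.5)
print(len(cent))  # 2 occupied voxels
```

Coarser voxel sizes merge more points per cell, which is one way the choice of voxelization changes the features a detector sees.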
As 3D acquisition technology develops and 3D sensors become increasingly affordable, large quantities of 3D point cloud data are emerging. How to effectively learn and extract geometric features from these point clouds has become an urgent problem. The geometric information of a point cloud is hidden in disordered, unstructured points, making point cloud analysis very challenging. To address this problem, we propose a novel network framework, called Tree Graph Network (TGNet), which can sample, group, and aggregate local geometric features. Specifically, we construct a Tree Graph by explicit rules, consisting of curves extending in all directions in the point cloud feature space, and then aggregate the features of the graph through a cross-attention mechanism. In this way, we incorporate more geometric structure information into the representation of local geometric features, which improves network performance. Our model performs well on several basic point cloud processing tasks such as classification, segmentation, and normal estimation, demonstrating the effectiveness and superiority of our network. Furthermore, we provide ablation experiments and visualizations to better understand the network.
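The abstract does not specify TGNet's sampling rule; as background, a sampling strategy commonly used in such sample-group-aggregate pipelines is farthest point sampling, sketched here as a plain NumPy illustration:

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Iteratively pick the point farthest from the already-selected set."""
    selected = [0]                          # arbitrary seed point
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dist))          # farthest remaining point
        selected.append(idx)
        # keep, for each point, its distance to the nearest selected point
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return np.array(selected)
```

Unlike random sampling, this spreads the selected points over the whole cloud, which helps the grouped neighborhoods cover the object's geometry.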
To address the current issues of inaccurate segmentation and the limited applicability of segmentation methods for building facades in point clouds, we propose a facade segmentation algorithm based on optimal dual-scale feature descriptors. First, we select the optimal dual-scale descriptors from a range of feature descriptors. Next, we segment the facade according to the threshold value of the chosen optimal dual-scale descriptors. Finally, we use RANSAC (Random Sample Consensus) to fit the segmented surface and optimize the fitting result. Experimental results show that, compared with commonly used facade segmentation algorithms, the proposed method yields more accurate segmentation results, providing a robust data foundation for subsequent 3D model reconstruction of buildings.
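The RANSAC surface-fitting step can be sketched for the planar case (a facade is locally well modeled by a plane); iteration count and inlier threshold below are illustrative assumptions, not the paper's settings:

```python
import random
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.05, seed=0):
    """Fit a dominant plane; return (normal, d) with normal . p + d ~ 0 for inliers."""
    rng = random.Random(seed)
    best_inliers, best_model = 0, None
    for _ in range(n_iters):
        i, j, k = rng.sample(range(len(points)), 3)   # minimal sample: 3 points
        normal = np.cross(points[j] - points[i], points[k] - points[i])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                               # degenerate (collinear) sample
            continue
        normal = normal / norm
        d = -normal.dot(points[i])
        dist = np.abs(points @ normal + d)            # point-to-plane distances
        inliers = int((dist < threshold).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (normal, d)
    return best_model

plane = np.array([[x, y, 0.0] for x in range(5) for y in range(5)])
normal, d = ransac_plane(np.vstack([plane, [[0.0, 0.0, 5.0]]]))
```

The consensus count makes the fit robust to the outlier point that a least-squares plane would be pulled toward.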
In the Mexican Intertropical Convergence Zone, particle size distributions within 500 m of cloud boundaries at altitudes of 1000, 2500, and 4200 m were compared against size distributions at the same levels but 1500 m away from the clouds. The differences between the distributions near and far from the cloud are related to processes that may change particle properties inside the cloud. Chemical changes in the aerosols are deduced from the particles' refractive index, as derived from comparisons with the scattering coefficient measured by a nephelometer. An analysis of ten cloud systems indicates that vertical transport of cloud-base aerosol followed by entrainment/detrainment is the cloud processing signature most frequently observed in the comparisons (65%). Changes in chemical composition are observed in approximately 20% of the cases, and another 20% of the cases showed removal by precipitation. About 5% of the comparisons showed clear evidence of changes by coalescence. The principal effect of these cloud-processed aerosols is an increase of optical depth in the layer from 30 m to 4200 m in the near-cloud regions, in comparison with the atmosphere farther from clouds.
High-resolution numerical simulation data of a rainstorm triggering debris flow in Sichuan Province, China, simulated by the Weather Research and Forecasting (WRF) Model, were used to study the dominant cloud microphysical processes of the torrential rainfall. The results showed that: (1) In the strong precipitation period, particle sizes of all hydrometeors increased, and the mean-mass diameters of graupel increased the most significantly, compared with those in the weak precipitation period. (2) The terminal velocity of raindrops was the largest among all hydrometeors, followed by that of graupel, which was much smaller. Differences between the terminal velocities of the various hydrometeors in the strong precipitation period were larger than those in the weak precipitation period, which favored relative motion, collection interaction, and transformation between the particles. The absolute terminal velocities of raindrops and graupel were significantly greater than the upward air velocity, and the stronger the precipitation, the greater the differences between them. (3) The orders of magnitude of the various hydrometeors' sources and sinks in the strong precipitation period were larger than those in the weak precipitation period, causing a difference in precipitation intensity. Water vapor, cloud water, raindrops, graupel, and their exchange processes played a major role in producing the torrential rainfall, and there were two main processes by which raindrops were generated: abundant water vapor condensed into cloud water; then, on the one hand, accretion of cloud water by rain water formed rain water, while on the other hand, accretion of cloud water by graupel formed graupel, and the melting of graupel formed rain water.
Due to the increasing number of cloud applications, the amount of data in the cloud is growing faster than ever before. The nature of cloud computing requires cloud data processing systems that can handle huge volumes of data with high performance. However, most current cloud storage systems adopt a hash-like approach to retrieving data that supports only simple keyword-based queries and lacks other forms of information search. Therefore, a scalable and efficient indexing scheme is clearly required. In this paper, we present SLC-index, a novel, scalable skip list-based index for cloud data processing. The SLC-index offers a two-layered architecture for extending indexing scope and facilitating better throughput. Dynamic load balancing is achieved by online migration of index nodes between servers, and the system is flexible thanks to dynamic addition and removal of servers. The SLC-index is efficient for both point and range queries. Experimental results show the efficiency of the SLC-index and its usefulness as an alternative approach for cloud-suitable data structures.
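What distinguishes an ordered index like SLC-index from hash-based retrieval is exactly the pair of queries it must serve. This single-node sketch (sorted keys plus binary search, standing in for the distributed skip list) shows that interface; it is an illustration of the query semantics, not the SLC-index itself:

```python
import bisect

class RangeIndex:
    """Sorted index supporting the point and range queries an
    SLC-index-style structure serves (hash tables support only the first)."""
    def __init__(self, items):
        self.keys = sorted(k for k, _ in items)
        self.map = dict(items)

    def point(self, key):
        """Exact-match lookup; None if absent."""
        return self.map.get(key)

    def range(self, lo, hi):
        """All (key, value) pairs with lo <= key <= hi, in key order."""
        left = bisect.bisect_left(self.keys, lo)
        right = bisect.bisect_right(self.keys, hi)
        return [(k, self.map[k]) for k in self.keys[left:right]]
```

In the real system, the ordered key space is what allows a range query to be routed to the few servers holding the relevant index nodes.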
Process discovery, one of the most challenging process analysis techniques, aims to uncover business process models from event logs. Many process discovery approaches have been invented in the past twenty years; however, most of them have difficulty handling multi-instance sub-processes. To address this challenge, we first introduce a multi-instance business process model (MBPM) to support the modeling of processes with multiple sub-process instantiations. Formal semantics of MBPMs are precisely defined using multi-instance Petri nets (MPNs), an extension of Petri nets with distinguishable tokens. Then, a novel process discovery technique is developed to support the discovery of MBPMs from event logs with sub-process multi-instantiation information. In addition, we propose to measure the quality of discovered MBPMs against the input event logs by transforming an MBPM into a classical Petri net so that existing quality metrics, e.g., fitness and precision, can be used. The proposed discovery approach is implemented as plugins in the ProM toolkit. Based on a cloud resource management case study, we compare our approach with state-of-the-art process discovery techniques. The results demonstrate that our approach outperforms existing approaches in discovering process models with multi-instance sub-processes.
The Internet of Cars, an outgrowth of the Internet of Things, is a key component of the forthcoming smart city. In this article, GPS technology, 3G wireless technology, and cloud-processing technology are employed to construct a cloud-processing network platform based on the Internet of Cars. On this platform, the positions and velocities of running cars, traffic flow information from fixed monitoring points, and transportation videos are combined into a virtual traffic flow data platform, which is a parallel system with real traffic flow and can supply basic data for analysis and decision-making in intelligent transportation systems.
Convective processes affect large-scale environments through cloud-radiation interaction, cloud microphysical processes, and surface rainfall processes. Over the last three decades, cloud-resolving models (CRMs) have been demonstrated to be capable of simulating convective-radiative responses to an imposed large-scale forcing. CRM-produced cloud and radiative properties have been used to study convection-related processes and their ensemble effects on large-scale circulations. This paper reviews recent progress in understanding convective processes through CRM simulations, including precipitation processes; cloud microphysical and radiative processes; dynamical processes; precipitation efficiency; diurnal variations of tropical oceanic convection; local-scale atmosphere-ocean coupling processes; and tropical convective-radiative equilibrium states. Two ongoing applications of CRMs to general circulation models (GCMs) are discussed: replacing convection and cloud schemes to study the interaction between cloud systems and large-scale circulation, and improving those schemes for climate simulations.
This paper shows how a desktop simulation can be migrated to its cloud equivalent using Windows Azure. Simulators are expensive and cost-intensive to maintain and upgrade, so buying one is not always feasible. It would therefore be of great significance to have an approach that provides simulators as services over the Internet, making them accessible from anywhere at any time; researchers and developers could then focus on their actual research, experiments, and intended results. The cloud simulation infrastructure of this contribution is capable of hosting different simulations that can be cloned as cloud services. The simulator example used here mimics the process of a distillation column, a plant widely used in several industrial applications. The cloud simulation core embedded in the cloud environment is fully independent of the simulator's user interface, meaning that the cloud simulator can be connected to any user interface. This allows simulation users, such as process control and alarm management designers, to connect to the cloud simulator to design, develop, and test their systems on a "pay-as-you-go" basis, as with most cloud computing services, which aim to provide computing as a utility like water and electricity. For coding convenience, Windows Azure was selected both for developing the cloud simulation and for hosting it, because the source code of the desktop simulator was already available in C# based on .NET technology. From a software engineering point of view, UML graphical notations, a widespread technology in object-oriented design and analysis, were applied to express the software requirement specifications of the distributed cloud simulation.
The prevailing idea so far about why rainfall occurs was that, after agglutination of water droplets with condensation nuclei, the size of the particle formed by the condensation nuclei connected with water droplets increased considerably and caused its fall. This idea led to numerous scientific publications proposing empirical distribution functions for the sizes of clouds' water droplets. Estimates provided by these empirical distribution functions were, in most cases, validated by comparison with UHF radar measurements. The condensation nuclei concept has not been sufficiently exploited, and this has led meteorologists into error in their attempts to describe clouds, thinking that clouds were formed of liquid water droplets. Indeed, the MBANE BIOUELE paradox (2005) confirms this embarrassing situation. In fact, when applying Archimedes' theorem to a liquid water droplet suspended in the atmosphere, we obtain a meaningless inequality, which would imply that the densities of pure water in the liquid and solid phases are much lower than that of the atmosphere at sea level. This inequality is easy to contradict: if you empty a bottle of pure liquid water into the ocean (where z equals 0), the water will not remain suspended in the air; i.e., applying Archimedes' theorem shows that no liquid (or solid) water droplet can remain suspended in the clouds. Indeed, all liquid (or solid) water droplets formed in clouds fall under the effect of gravity and produce rain. This means that our current description of clouds is wrong. In this study, we instead describe clouds as a gas composed of dry air and saturated water vapor whose optical properties depend on temperature; i.e., as the temperature of a cloud decreases, the color of this gaseous system tends towards white.
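The density comparison the buoyancy argument invokes can be checked numerically; the figures below are standard approximate sea-level values, not taken from the paper:

```python
# Approximate standard densities at sea level
rho_water_liquid = 1000.0   # kg/m^3, pure liquid water
rho_air_sea_level = 1.225   # kg/m^3, air at 15 degrees C, 101.325 kPa

# The buoyant (Archimedes) force on a droplet of volume V is rho_air * g * V,
# while its weight is rho_water * g * V, so buoyancy alone cannot support it
# unless rho_air > rho_water.
print(rho_water_liquid / rho_air_sea_level)  # a droplet is ~800x denser than air
```

This confirms only the density ordering itself; the argument's further conclusions are the paper's own.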
Cloud computing, as a disruptive technology, provides a dynamic, elastic, and promising computing environment to tackle the challenges of big data processing and analytics. Hadoop and MapReduce are widely used open-source frameworks in cloud computing for storing and processing big data in a scalable fashion. Spark is a newer parallel computing engine that works together with Hadoop and exceeds MapReduce performance via its in-memory computing and high-level programming features. In this paper, we present our design and implementation of a productive, domain-specific big data analytics cloud platform on top of Hadoop and Spark. To increase users' productivity, we created a variety of data processing templates to simplify programming efforts. We conducted experiments on its productivity and performance with a few basic but representative data processing algorithms in the petroleum industry. Geophysicists can use the platform to productively design and implement scalable seismic data processing algorithms without handling the details of data management and the complexity of parallelism. The cloud platform generates a complete data processing application based on the user's kernel program and simple configurations, allocates resources, and executes it in parallel on top of Spark and Hadoop.
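The template idea, where users supply only kernel functions and the platform supplies everything else, can be sketched in plain Python as a stand-in for what Spark would parallelize; the seismic "energy" kernel is a hypothetical example, not one of the paper's algorithms:

```python
from functools import reduce

def run_template(records, map_fn, reduce_fn, init):
    """Minimal map-then-reduce template: users write only the kernel
    functions; a real platform would distribute this over a cluster."""
    return reduce(reduce_fn, map(map_fn, records), init)

# Hypothetical kernel: total energy of seismic trace amplitudes
traces = [[1.0, -2.0], [3.0]]
energy = run_template(traces,
                      map_fn=lambda trace: sum(a * a for a in trace),
                      reduce_fn=lambda acc, x: acc + x,
                      init=0.0)
print(energy)  # 14.0
```

The same separation (user kernel vs. platform-managed execution) is what lets the platform swap in Spark's distributed map/reduce without changing the user's code.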
The quality of satellite laser ranging (SLR) data from COMPASS was analyzed, the difference between curve recognition in computer vision and pre-processing of SLR data was discussed, and a new algorithm for SLR data based on curve recognition from point clouds is proposed. The results obtained by the new algorithm are 85% (or more) consistent with those of the screen-displaying method; furthermore, the new method can process SLR data automatically, making it suitable for use in the development of the COMPASS navigation system.
In the railway system, fasteners serve to damp vibrations, maintain the track gauge, and adjust the track level. Routine maintenance and inspection of fasteners are therefore important to ensure the safe operation of track lines. Current assessment methods for fastener tightness include manual observation, acoustic wave detection, and image detection, which suffer from limitations such as low accuracy and efficiency, susceptibility to interference and misjudgment, and a lack of accurate, stable, and fast detection. Aiming at the small deformation characteristics and large elastic change of fasteners from fully loosened to fully tightened, this study proposes high-precision surface-structured-light technology for fastener detection, fastener deformation feature extraction based on the centerline projection distance, and a fastener tightness regression method based on neural networks. First, the method uses a 3D camera to obtain a fastener point cloud and segments the elastic rod area based on iterative closest point (ICP) registration. Principal component analysis is used to calculate the normal vector of the segmented elastic rod surface and to extract points on the centerline of the elastic rod. Each point is projected onto the upper surface of the bolt to calculate the projection distance. Subsequently, the mapping relationship between the projection distance sequence and fastener tightness is established, and the influence of each parameter on the tightness prediction is analyzed. Finally, a fastener detection scene was set up at a track experimental base to collect data and verify the algorithm. The results showed that the deviation (RMSE) between the regressed fastener tightness and the actual measured value was 0.2196 mm, a significant improvement over other tightness detection methods, realizing effective fastener tightness regression.
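The two geometric primitives the pipeline relies on, a PCA surface normal and a point-to-plane projection distance, can be sketched as follows (a generic illustration of these standard operations, not the paper's implementation):

```python
import numpy as np

def surface_normal(points):
    """Estimate a surface normal via PCA: the eigenvector of the covariance
    matrix with the smallest eigenvalue is orthogonal to the surface."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh sorts eigenvalues ascending
    return eigvecs[:, 0]

def projection_distance(point, plane_point, normal):
    """Signed distance from a point (e.g., on the rod centerline) to the
    plane through plane_point with the given normal (e.g., the bolt's top)."""
    normal = normal / np.linalg.norm(normal)
    return float((point - plane_point) @ normal)
```

A sequence of such distances along the centerline is then the feature the tightness regressor consumes.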
To address the problems in autonomous driving systems that vehicle self-localization and navigation cannot accurately estimate the vehicle pose and that navigation paths are not smooth enough, a localization and navigation method based on a prior LiDAR point cloud map is proposed. Point cloud segmentation is used to separate drivable regions and potential risk sources; an NDT (Normal Distribution Transform) point cloud registration and localization method with an optimized convergence procedure is studied; and the traditional A* algorithm is improved in two respects, dynamic weight design and an expanded-neighborhood priority search strategy, to meet the real-time localization and navigation needs of autonomous driving. Experiments were conducted with the Baidu Apollo autonomous driving development kit (D-KIT) in multiple control groups: with voxel downsampling Leafsize parameters of 1 (high sampling), 1.2 (medium sampling), and 1.5 (low sampling), localization time was reduced by 27.77%, 38.75%, and 38.30%, respectively. In navigation experiments on four groups of scenarios matching actual driving needs, the maximum curvature of the improved navigation paths was reduced by 80.9%, 74.9%, 65%, and 69.5%, respectively; path curvature remained low, stable, and smooth throughout navigation, and the curvature data conform to vehicle dynamics. The method provides an effective approach for vehicle localization and high-precision navigation.
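The dynamic-weight idea for A* can be sketched on a toy grid: a weight w > 1 on the heuristic biases the search toward the goal, trading optimality for speed (the paper's actual weighting schedule and neighborhood strategy are not reproduced here):

```python
import heapq

def weighted_astar(grid, start, goal, w=1.5):
    """A* on a 4-connected grid of 0 (free) / 1 (blocked) cells,
    ordering the frontier by f = g + w*h; w = 1 is plain A*."""
    def h(p):                                 # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(w * h(start), 0, start, [start])]
    best_g = {}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in best_g and best_g[node] <= g:
            continue                          # already reached more cheaply
        best_g[node] = g
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] == 0:
                heapq.heappush(open_set,
                               (g + 1 + w * h((nx, ny)), g + 1, (nx, ny), path + [(nx, ny)]))
    return None
```

Making w vary with the search state, as the dynamic-weight improvement does, lets the planner search greedily in open areas while staying closer to optimal near obstacles.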
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 91948203 and 52075532).
Funding: supported by the Key Research Program of the Chinese Academy of Sciences (Grant No. KZZD-EW-05-01) and the National Basic Research Program of China (973 Program) (Grant No. 2014CB441402).
Funding: Projects 61363021, 61540061, and 61663047 supported by the National Natural Science Foundation of China; Project 2017SE206 supported by the Open Foundation of the Key Laboratory in Software Engineering of Yunnan Province, China.
Funding: supported by the National Natural Science Foundation of China (61902222), the Taishan Scholars Program of Shandong Province (tsqn201909109), the Natural Science Excellent Youth Foundation of Shandong Province (ZR2021YQ45), and the Youth Innovation Science and Technology Team Foundation of Shandong Higher Schools (2021KJ031).
Funding: supported by the National Basic Research Program of China (973 Program) 2012CB821200 (2012CB821206); the National Natural Science Foundation under Grant Nos. 61170113, 91024001, and 61070142; the Beijing Natural Science Foundation (No. 4111002); and Grants KM201010011006 and PHR201008242.
文摘Internet of Car, resulting from the Internet of Things, is a key point for the forthcoming smart city. In this article, GPS technology, 3G wireless technology and cloud-processing technology are employed to construct a cloud-processing network platform based on the Internet of Car. By this platform, positions and velocity of the running cars, information of traffic flow from fixed monitoring points and transportation videos are combined to be a virtual traffic flow data platform, which is a parallel system with real traffic flow and is able to supply basic data for analysis and decision of intelligent transportation system.
Abstract: Convective processes affect large-scale environments through cloud-radiation interaction, cloud microphysical processes, and surface rainfall processes. Over the last three decades, cloud-resolving models (CRMs) have been shown to be capable of simulating convective-radiative responses to an imposed large-scale forcing. The CRM-produced cloud and radiative properties have been utilized to study convection-related processes and their ensemble effects on large-scale circulations. This paper reviews recent progress in understanding convective processes through CRM simulations, including precipitation processes; cloud microphysical and radiative processes; dynamical processes; precipitation efficiency; diurnal variations of tropical oceanic convection; local-scale atmosphere-ocean coupling processes; and tropical convective-radiative equilibrium states. Two ongoing applications of CRMs to general circulation models (GCMs) are discussed: replacing convection and cloud schemes to study the interaction between cloud systems and large-scale circulation, and improving such schemes for climate simulations.
Abstract: This paper shows how a desktop simulation can be migrated to its cloud equivalent using Windows Azure. Simulators are undeniably expensive and cost-intensive to maintain and upgrade, so buying one is not always feasible. It would therefore be of great significance to have an approach that provides simulators as services over the Internet, making them accessible from anywhere and at any time; researchers and developers can then focus on their actual research, experiments, and intended results. The cloud simulation infrastructure of this contribution is capable of hosting different simulations and cloning them as cloud services. The simulator example used here mimics the process of a distillation column, a widely used plant in several industrial applications. The cloud simulation core embedded in the cloud environment is fully independent of the simulator's user interface, meaning that the cloud simulator can be connected to any user interface. This allows simulation users, such as process control and alarm management designers, to connect to the cloud simulator in order to design, develop, and test their systems on a "pay-as-you-go" basis, as with most cloud computing services, which aim to provide computing as a utility like water and electricity. For coding convenience, Windows Azure was selected both for developing the cloud simulation and for hosting it, because the source code of the desktop simulator was already available in C# based on .NET technology. From a software-engineering point of view, UML graphical notations, a widespread technology in object-oriented analysis and design, were applied to express the software requirement specifications of the distributed cloud simulation.
Abstract: The prevailing idea so far about why rainfall occurs was that, after agglutination of water droplets with condensation nuclei, the size of the particle formed by the condensation nuclei connected with water droplets increased considerably and caused its fall. This idea has led to numerous scientific publications in which empirical distribution functions of cloud water-droplet sizes were proposed. Estimated values provided by these empirical distribution functions were, in most cases, validated by comparison with UHF radar measurements. The condensation-nuclei concept has not been sufficiently exploited, and this has led meteorologists into error in their attempts to describe clouds, thinking that clouds were formed of liquid water droplets. Indeed, the MBANE BIOUELE paradox (2005) confirms this embarrassing situation: when applying Archimedes' theorem to a liquid water droplet suspended in the atmosphere, we obtain a meaningless inequality, which would imply that the densities of pure water in the liquid and solid phases are much lower than that of the atmosphere at sea level. This meaningless inequality is easy to contradict: of course, if you empty a bottle of pure liquid water into the ocean (where z equals 0), this water will not remain suspended in the air. That is, applying Archimedes' theorem shows that there is no liquid (or solid) water droplet suspended in the clouds; all liquid (or solid) water droplets that form in clouds fall under the effect of gravity and produce rain. This means that our current description of the clouds is totally wrong. In this study, we describe a cloud as a gas composed of dry air and saturated water vapor whose optical properties depend on temperature, i.e., when the temperature of a cloud decreases, the color of this gaseous system tends towards white.
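The buoyancy argument above can be checked with a few lines of arithmetic, using standard sea-level densities. This is an illustrative calculation, not code from the paper:

```python
# Buoyancy check for a 1 mm-radius water droplet at sea level:
# Archimedes' upthrust equals the weight of the displaced air, which is
# roughly 800x smaller than the droplet's own weight, so the droplet falls.
import math

RHO_WATER = 1000.0   # kg/m^3, liquid water
RHO_AIR = 1.225      # kg/m^3, air at sea level
G = 9.81             # m/s^2

r = 1e-3                            # droplet radius, m
volume = 4.0 / 3.0 * math.pi * r**3
weight = RHO_WATER * volume * G     # downward force
buoyancy = RHO_AIR * volume * G     # upward force (Archimedes)

print(f"weight/buoyancy ratio: {weight / buoyancy:.0f}")  # ≈ 816
```

Since the ratio is independent of droplet size (the volume cancels), any condensed droplet, however small, is overwhelmingly heavier than the air it displaces; only drag, not buoyancy, slows its fall.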
Abstract: Cloud computing, as a disruptive technology, provides a dynamic, elastic, and promising computing climate for tackling the challenges of big data processing and analytics. Hadoop and MapReduce are widely used open-source frameworks in cloud computing for storing and processing big data in a scalable fashion. Spark is the latest parallel computing engine that works with Hadoop and exceeds MapReduce performance via its in-memory computing and high-level programming features. In this paper, we present our design and implementation of a productive, domain-specific big data analytics cloud platform on top of Hadoop and Spark. To increase users' productivity, we created a variety of data processing templates to simplify the programming effort. We have conducted experiments on its productivity and performance with a few basic but representative data processing algorithms in the petroleum industry. Geophysicists can use the platform to productively design and implement scalable seismic data processing algorithms without handling the details of data management or the complexity of parallelism. The cloud platform generates a complete data processing application from the user's kernel program and simple configurations, allocates resources, and executes it in parallel on top of Spark and Hadoop.
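The template idea can be illustrated in miniature: the user supplies only a small kernel, while the surrounding harness hides execution details. The sketch below is pure Python run sequentially, standing in for the platform's Spark-backed templates; all names and the sample data are made up for illustration.

```python
from functools import reduce

def run_template(records, map_fn, reduce_fn, init):
    """A minimal 'processing template': the user supplies only a kernel
    (map_fn / reduce_fn); partitioning and execution are hidden behind
    this call. This mirrors the template idea in spirit only -- the real
    platform dispatches the kernel to Spark executors in parallel."""
    return reduce(reduce_fn, map(map_fn, records), init)

# Example kernel: total absolute amplitude over a set of seismic traces.
traces = [[0.2, -1.5, 0.7], [3.0, -0.1], [-2.2]]
total = run_template(
    traces,
    map_fn=lambda trace: sum(abs(s) for s in trace),
    reduce_fn=lambda acc, x: acc + x,
    init=0.0,
)
print(round(total, 3))  # 7.7
```

The payoff of the template pattern is that the kernel (`map_fn`/`reduce_fn`) is the only code the domain expert writes; swapping the sequential harness for a distributed one changes nothing in the kernel.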
Abstract: The satellite laser ranging (SLR) data quality from COMPASS was analyzed, and the relationship between curve recognition in computer vision and the pre-processing of SLR data was discussed; finally, a new algorithm for pre-processing SLR data, based on curve recognition from a point cloud, is proposed. The results obtained by the new algorithm are 85% (or even more) consistent with those of the screen-displaying method; furthermore, the new method can process SLR data automatically, which makes it usable in the development of the COMPASS navigation system.
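As a simplified illustration of screening noisy SLR points against a fitted curve, the sketch below fits a straight-line trend by least squares and rejects points whose residual exceeds a sigma threshold. This is a generic one-pass pre-processing sketch, not the paper's curve-recognition algorithm; real screening typically iterates the fit after rejection.

```python
from statistics import mean, pstdev

def screen_residuals(times, ranges, n_sigma=2.0):
    """Keep indices of (time, range) points whose residual from a
    least-squares line lies within n_sigma standard deviations.
    A toy stand-in for curve-based SLR data screening."""
    tm, rm = mean(times), mean(ranges)
    slope = (sum((t - tm) * (r - rm) for t, r in zip(times, ranges))
             / sum((t - tm) ** 2 for t in times))
    intercept = rm - slope * tm
    residuals = [r - (slope * t + intercept) for t, r in zip(times, ranges)]
    sigma = pstdev(residuals)
    return [i for i in range(len(times))
            if abs(residuals[i]) <= n_sigma * sigma]

# Ranges follow a linear trend except one gross noise point (index 3).
times = [0, 1, 2, 3, 4, 5]
ranges = [10.0, 12.1, 13.9, 90.0, 18.0, 20.1]
print(screen_residuals(times, ranges))  # [0, 1, 2, 4, 5]
```

Note that a single gross outlier inflates both the fit and sigma, which is one reason curve-recognition approaches that treat the echo track as a shape in the point cloud can outperform plain sigma rejection.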
Funding: Supported by the Fundamental Research Funds for the Central Universities of China (Grant No. 2023JBMC014).
Abstract: In the railway system, fasteners serve to damp vibration, maintain the track gauge, and adjust the track level. Routine maintenance and inspection of fasteners are therefore important to ensure the safe operation of track lines. Current assessment methods for fastener tightness include manual observation, acoustic wave detection, and image detection; these have limitations such as low accuracy and efficiency and susceptibility to interference and misjudgment, and accurate, stable, and fast detection methods are lacking. Addressing the small deformation and large elastic change of fasteners from fully loose to fully tight, this study proposes high-precision surface-structured-light technology for fastener detection, fastener deformation feature extraction based on the centerline projection distance, and a fastener tightness regression method based on neural networks. First, the method uses a 3D camera to obtain a fastener point cloud and then segments the elastic rod area based on iterative closest point registration. Principal component analysis is used to calculate the normal vector of the segmented elastic rod surface and to extract the points on the centerline of the elastic rod. Each point is projected onto the upper surface of the bolt to calculate the projection distance. Subsequently, the mapping relationship between the projection-distance sequence and fastener tightness is established, and the influence of each parameter on the tightness prediction is analyzed. Finally, a fastener detection scene was set up at the track experimental base to collect data and verify the algorithm. The results showed that the RMSE between the regressed fastener tightness and the actual measured value was 0.2196 mm, a significant improvement over other tightness detection methods, realizing effective fastener tightness regression.
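The projection-distance feature can be illustrated with a toy computation: given centerline points extracted from the elastic rod, the signed distance of each point to the bolt's upper-surface plane is a dot product with the unit plane normal. The coordinates below are made up for illustration; in the actual pipeline the centerline and normal come from PCA on the segmented point cloud.

```python
def projection_distance(point, plane_point, plane_normal):
    """Signed distance from a centerline point to a plane (the bolt's
    upper surface), measured along the unit plane normal. A simplified
    sketch of the projection-distance feature."""
    d = [p - q for p, q in zip(point, plane_point)]
    return sum(a * b for a, b in zip(d, plane_normal))

# Toy setup: bolt upper surface is the z = 0 plane, upward normal (0, 0, 1);
# three hypothetical points along the elastic rod centerline.
centerline = [(0.0, 0.0, 5.2), (1.0, 0.0, 4.8), (2.0, 0.0, 4.1)]
distances = [projection_distance(p, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
             for p in centerline]
print(distances)  # [5.2, 4.8, 4.1]
```

Such a distance sequence shrinks as the fastener is tightened and the rod is pressed toward the bolt surface, which is what makes it a usable regression input for tightness.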
Abstract: To address the problems that vehicle self-localization and navigation in autonomous driving systems cannot accurately estimate the vehicle pose and that navigation paths are insufficiently smooth, a localization and navigation method based on a prior LiDAR point cloud map is proposed. Point cloud segmentation is used to separate the drivable area and potential risk sources; an NDT (Normal Distribution Transform) point cloud registration and localization method with an optimized convergence procedure is studied; and the traditional A* algorithm is improved in two respects, dynamic weight design and an expanded-neighborhood priority search strategy, to meet the real-time localization and navigation needs of autonomous driving. Multiple controlled experiments were conducted with the Baidu Apollo autonomous driving development kit (D-KIT). With voxel-downsampling Leafsize parameters of 1 (high sampling), 1.2 (medium sampling), and 1.5 (low sampling), localization time was reduced by 27.77%, 38.75%, and 38.30%, respectively. Four groups of navigation experiments matching real driving requirements were selected; the maximum curvature of the improved navigation paths decreased by 80.9%, 74.9%, 65%, and 69.5%, respectively, and the path curvature remained low, stable, and smooth throughout navigation, with curvature data consistent with vehicle dynamics. This provides an effective method for vehicle localization and high-precision navigation.
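The dynamic-weight idea for A* can be sketched as follows: the heuristic weight shrinks from a maximum toward 1 as the search nears the goal, trading greedy speed far from the goal for near-optimal behavior close to it. This is an illustrative weighting scheme, not the paper's exact design, and the expanded-neighborhood search strategy is omitted.

```python
import heapq

def weighted_astar(grid, start, goal, w_max=2.0):
    """Grid A* with a dynamically decaying heuristic weight
    (illustrative sketch of the dynamic-weight idea)."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan

    h0 = max(h(start), 1)          # normalizer for the weight decay
    open_set = [(0, start)]
    g = {start: 0}
    parent = {start: None}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:            # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if not (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])):
                continue
            if grid[nxt[0]][nxt[1]]:   # 1 marks an obstacle cell
                continue
            ng = g[cur] + 1
            if ng < g.get(nxt, float("inf")):
                g[nxt] = ng
                parent[nxt] = cur
                # weight decays linearly from w_max (far) to 1 (at goal)
                w = 1.0 + (w_max - 1.0) * h(nxt) / h0
                heapq.heappush(open_set, (ng + w * h(nxt), nxt))
    return None  # goal unreachable

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
path = weighted_astar(grid, (0, 0), (2, 0))
print(len(path) - 1)  # 8 moves around the obstacle wall
```

With a fixed large weight, A* can commit to jagged, suboptimal routes; decaying the weight near the goal is one simple way to recover smoother, shorter final approaches, which is the motivation behind dynamic weighting.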