Journal Articles
2 articles found
1. Rock characterization while drilling and application of roof bolter drilling data for evaluation of ground conditions (cited by 5)
Authors: Jamal Rostami, Sair Kahraman, Ali Naeimipour, Craig Collins
Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2015, No. 3, pp. 273-281
Abstract: Despite recent advances in mine health and safety, roof collapse and instabilities are still the leading causes of injury and fatality in underground mining operations. Improving safety and achieving optimum design of ground support require good and reliable ground characterization. While many geophysical methods have been developed for ground characterization, their accuracy is insufficient for customized ground support design of underground workings. Direct measurements on roof and wall strata samples from exploration borings are reliable, but the holes are too far apart to be suitable for design purposes. The best source of information could be geological back mapping of the roof and walls, but this is disruptive to mining operations and provides information only from the rock surface. Interpretation of the data obtained from roof bolt drilling can offer a good and reliable source of information for ground characterization, ground support design, and evaluation. This paper offers a brief review of mine roof characterization methods, followed by an introduction to and discussion of roof characterization using instrumented roof bolters. A brief overview of the results of a preliminary study and initial testing of an instrumented drill, together with a summary of suggested improvements, is also presented.
Keywords: Roof bolter; Rock characterization; Three-dimensional (3D) visualization of ground; Ground support optimization
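The abstract does not state how the bolter drilling data are interpreted, so the following is only an illustrative sketch of one widely used measurement-while-drilling metric, Teale's drilling specific energy, computed from the quantities an instrumented roof bolter typically logs (thrust, torque, rotational speed, penetration rate). The function name, parameter names, and sample values are all assumptions for illustration, not the paper's method.

```python
import math

def drilling_specific_energy(thrust_n, torque_nm, rpm,
                             penetration_m_per_min, bit_diameter_m):
    """Teale's drilling specific energy (J/m^3): thrust term + rotary term.

    Illustrative only; the paper's own interpretation method is not given
    in the abstract. Higher specific energy generally indicates stronger
    or less fractured roof rock.
    """
    area = math.pi * (bit_diameter_m / 2.0) ** 2       # bit cross-sectional area, m^2
    rot_speed = rpm / 60.0                             # rev/s
    penetration = penetration_m_per_min / 60.0         # m/s
    thrust_term = thrust_n / area                      # Pa (= J/m^3)
    rotary_term = (2.0 * math.pi * rot_speed * torque_nm) / (area * penetration)
    return thrust_term + rotary_term

# Hypothetical reading from a single drill-data log entry
se = drilling_specific_energy(thrust_n=8000.0, torque_nm=120.0, rpm=400.0,
                              penetration_m_per_min=1.5, bit_diameter_m=0.028)
print(f"Specific energy: {se / 1e6:.1f} MJ/m^3")
```

Logging this metric along the length of each bolt hole is one common way such drilling data are turned into a profile of roof strata strength, which is the kind of ground characterization the paper discusses.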
2. GPT-4 enhanced multimodal grounding for autonomous driving: Leveraging cross-modal attention with large language models
Authors: Haicheng Liao, Huanming Shen, Zhenning Li, Chengyue Wang, Guofa Li, Yiming Bie, Chengzhong Xu
Communications in Transportation Research, 2024, No. 4, pp. 5-23
Abstract: In the field of autonomous vehicles (AVs), accurately discerning commander intent and executing linguistic commands within a visual context presents a significant challenge. This paper introduces a sophisticated encoder-decoder framework developed to address visual grounding in AVs. Our Context-Aware Visual Grounding (CAVG) model is an advanced system that integrates five core encoders (Text, Emotion, Image, Context, and Cross-Modal) with a multimodal decoder. This integration enables the CAVG model to capture contextual semantics and learn human emotional features, augmented by state-of-the-art Large Language Models (LLMs) including GPT-4. The architecture of CAVG is reinforced by multi-head cross-modal attention mechanisms and a Region-Specific Dynamic (RSD) layer for attention modulation. This design enables the model to efficiently process and interpret a range of cross-modal inputs, yielding a comprehensive understanding of the correlation between verbal commands and the corresponding visual scenes. Empirical evaluations on the Talk2Car dataset, a real-world benchmark, demonstrate that CAVG establishes new standards in prediction accuracy and operational efficiency. Notably, the model performs well even with limited training data, ranging from 50% to 75% of the full dataset, which highlights its effectiveness and potential for deployment in practical AV applications. Moreover, CAVG has shown remarkable robustness and adaptability in challenging scenarios, including long-text command interpretation, low-light conditions, ambiguous command contexts, inclement weather, and densely populated urban environments.
Keywords: Autonomous driving; Visual grounding; Cross-modal attention; Large language models; Human-machine interaction
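The abstract names multi-head cross-modal attention as a core component of CAVG but gives no implementation details, so the following is a minimal NumPy sketch of the general technique: text-command features act as queries and attend over image-region features. All dimensions, weight initializations, and inputs are assumptions for illustration; this is not the CAVG code.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multihead_cross_attention(text_feats, image_feats, num_heads, rng):
    """Minimal multi-head cross-modal attention sketch (illustrative only).

    Queries come from the text (command) encoder, keys/values from the
    image (region) encoder, so each command token gathers visual context.
    """
    n_txt, d_model = text_feats.shape
    n_img, _ = image_feats.shape
    d_head = d_model // num_heads

    # Randomly initialized projections stand in for learned parameters.
    w_q = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    w_k = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    w_v = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    w_o = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)

    # Project and split into heads: (heads, tokens, d_head)
    q = (text_feats @ w_q).reshape(n_txt, num_heads, d_head).transpose(1, 0, 2)
    k = (image_feats @ w_k).reshape(n_img, num_heads, d_head).transpose(1, 0, 2)
    v = (image_feats @ w_v).reshape(n_img, num_heads, d_head).transpose(1, 0, 2)

    # Scaled dot-product attention per head: text tokens attend to image regions.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)   # (heads, n_txt, n_img)
    attn = softmax(scores, axis=-1)
    ctx = attn @ v                                         # (heads, n_txt, d_head)

    # Merge heads and apply the output projection.
    ctx = ctx.transpose(1, 0, 2).reshape(n_txt, d_model)
    return ctx @ w_o                                       # (n_txt, d_model)

# Hypothetical shapes: 12 command tokens, 36 image regions, 256-d embeddings.
rng = np.random.default_rng(0)
text = rng.standard_normal((12, 256))
regions = rng.standard_normal((36, 256))
fused = multihead_cross_attention(text, regions, num_heads=8, rng=rng)
print(fused.shape)  # (12, 256)
```

The fused, visually conditioned token features are what a decoder would then use to score candidate image regions against the command, which is the role cross-modal attention plays in visual grounding pipelines of this kind.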