Journal Articles (2 results found)
1. Aye-aye middle finger kinematic modeling and motion tracking during tap-scanning
Authors: Nihar Masurkar, Jiming Kang, Hamidreza Nemati, Ehsan Dehghan-Niri
Journal: Biomimetic Intelligence & Robotics (EI), 2023, Issue 4, pp. 88-96 (9 pages)
The aye-aye (Daubentonia madagascariensis) is a nocturnal lemur native to the island of Madagascar with a unique thin middle finger. Its slender third digit has a remarkably specific adaptation, allowing the animal to perform tap-scanning to locate small cavities beneath tree bark and extract wood-boring larvae from them. As an exceptional active acoustic actuator, this finger makes the aye-aye's biological system an attractive model for pioneering Nondestructive Evaluation (NDE) methods and robotic systems. Despite the importance of this finger to the aye-aye's unique foraging and its potential contribution to engineered sensing, little is known about the mechanism and dynamics of this unique digit. This paper applies a motion-tracking approach to the aye-aye's middle finger using simultaneous videographic capture. To mimic the motion, a two-link robot arm model was designed to reproduce the trajectory. Kinematic formulations were derived for the motion of the middle finger using the Lagrangian method. In addition, a hardware model was developed to simulate the aye-aye's finger motion. To validate the model, different motion states, such as trajectory paths and joint angles, were compared. The simulation results indicate that the kinematics of the model were consistent with the actual finger movement. This model is used to understand the aye-aye's unique tap-scanning process, with a view to pioneering new tap-testing NDE strategies for various inspection applications.
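The two-link arm model described in the abstract can be sketched with planar forward kinematics, mapping two joint angles to a fingertip position. This is a minimal illustration only; the link lengths and angle sweep below are placeholder values, not parameters from the paper.

```python
import math

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    """Planar two-link arm: return the (x, y) position of the end effector.

    theta1 is the base joint angle measured from the x-axis; theta2 is the
    second joint angle relative to the first link. Link lengths l1 and l2
    are illustrative placeholders.
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Sample a tapping-like fingertip trajectory by sweeping the second joint
# while holding the first joint fixed (angles are arbitrary examples).
trajectory = [forward_kinematics(math.radians(30), math.radians(t))
              for t in range(0, 91, 15)]
```

Comparing such a computed trajectory against tracked fingertip positions is one straightforward way to check that a kinematic model reproduces the observed motion.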
Keywords: Aye-aye, Kinematics, Lagrangian method, Motion tracking, Tap-scanning
2. Non-destructive thermal imaging for object detection via advanced deep learning for robotic inspection and harvesting of chili peppers (cited by 1)
Authors: Steven C. Hespeler, Hamidreza Nemati, Ehsan Dehghan-Niri
Journal: Artificial Intelligence in Agriculture, 2021, Issue 1, pp. 102-117 (16 pages)
Deep learning has been utilized in computer vision for object detection for almost a decade. Real-time object detection for robotic inspection and harvesting has gained interest during this time as a possible technique for high-quality machine assistance in agricultural applications. We utilize RGB and thermal images of chili peppers in environments with varying amounts of debris, pepper overlap, and ambient lighting, train on this dataset, and compare object detection methods. Results are presented from real-time and slower-than-real-time object detection models. Two advanced deep learning algorithms, Mask Regional Convolutional Neural Networks (Mask-RCNN) and You Only Look Once version 3 (YOLOv3), are compared in terms of object detection accuracy and computational cost. With the YOLOv3 architecture, an overall training mean average precision (mAP) value of 1.0 is achieved. Most testing images from this model score within a range of 97 to 100% confidence in natural environments. It is shown that the YOLOv3 algorithm is superior to Mask-RCNN, with over 10 times the computational speed on the chili dataset. However, some of the RGB test images resulted in low classification scores when heavy debris was present in the image. A significant improvement in real-time classification scores was observed when thermal images were used, especially with heavy debris present. We found and report improved prediction scores with a thermal imagery dataset where YOLOv3 struggled on the RGB images. It was shown that mapping temperature differences between the pepper and plant/debris can provide significant features for object detection in real time and can help improve prediction accuracy under heavy debris, variant ambient lighting, and overlapping of peppers. In addition, successful thermal imaging for real-time robotic harvesting could make the harvesting period more efficient and open up harvesting opportunities in low-light situations.
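Detection metrics such as the mean average precision (mAP) reported above are built on intersection-over-union (IoU) between predicted and ground-truth bounding boxes. The sketch below shows the standard IoU computation; the corner-coordinate box format (x1, y1, x2, y2) is an assumption for illustration, not the paper's actual code.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle (empty if boxes do not overlap).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A predicted box is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold (0.5 is a common choice); precision-recall curves built from these matches are then averaged into mAP.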
Keywords: Deep learning, You Only Look Once (YOLO) v3, Object detection, Chili pepper fruit