Abstract: This article explores the communication technologies used in a hospital network's transition from Internet Protocol version 4 (IPv4) to Internet Protocol version 6 (IPv6), focusing on the application of dual-stack, tunneling, and translation technologies during the migration. A dual-stack network environment was built with the Huawei eNSP simulator, and a series of tests verified its feasibility. The results show that dual-stack technology effectively supports the coexistence of the IPv4 and IPv6 protocols and provides a technical foundation for modernizing hospital networks. The study also highlights the security challenges and equipment-upgrade issues that must be considered during the transition, and proposes concrete implementation strategies and optimization recommendations.
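The dual-stack coexistence described above rests on well-defined relationships between the two address families. As a small illustration (not part of the article's eNSP experiments), Python's standard ipaddress module can show how an IPv4 peer appears to a dual-stack IPv6 socket as an IPv4-mapped address; the addresses used are documentation addresses (RFC 5737), not addresses from the article:

```python
import ipaddress

# An IPv4 host address from the RFC 5737 documentation range.
v4 = ipaddress.IPv4Address("192.0.2.10")

# Its IPv4-mapped IPv6 form (::ffff:a.b.c.d), the representation a
# dual-stack host sees when an IPv6 socket accepts an IPv4 connection.
mapped = ipaddress.IPv6Address(f"::ffff:{v4}")

print(mapped)              # ::ffff:c000:20a  (canonical compressed form)
print(mapped.ipv4_mapped)  # 192.0.2.10 — the embedded IPv4 address recovered
```

The `ipv4_mapped` property returns `None` for ordinary IPv6 addresses, so the same check also distinguishes native IPv6 peers from mapped IPv4 ones.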
Abstract: The new version of "Dreamlike Lijiang" is now on stage with a brand-new look in Guilin, Guangxi, China. The show has been performed successfully for over four years so far! As a leading player in the local live-show market, "Dreamlike Lijiang" has become a signature success over the past four years, earning popularity and applause from its audiences. As a famous brand in the show industry, "
Abstract: To address the large variations in field of view and the complex spatio-temporal information in object detection for UAV aerial images, this paper proposes an aerial small-object detection model based on low-dimensional image feature fusion, built on the YOLOv5 (You Only Look Once version 5) architecture. Coordinate Attention (CA) is introduced to improve the inverted residual blocks of MobileNetV3, adding spatial-dimension information while reducing the number of model parameters. The YOLOv5 feature pyramid network is modified to fuse feature maps from shallow layers, strengthening the model's ability to express effective low-dimensional image information and thereby improving small-object detection accuracy. To reduce interference from the complex backgrounds of aerial images, a parameter-free average attention module is introduced that attends to both spatial and channel attention, and VariFocal Loss is adopted to reduce the weight of negative samples during training. Experiments on the VisDrone dataset validate the model's effectiveness: it improves detection accuracy while markedly reducing complexity.
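The VariFocal Loss mentioned above down-weights negative samples with a focal factor while weighting positives by their target quality score. A minimal scalar sketch follows; the α = 0.75 and γ = 2.0 defaults are taken from the VariFocalNet paper and are an assumption here, since the abstract does not state them:

```python
import math

def varifocal_loss(p: float, q: float, alpha: float = 0.75, gamma: float = 2.0) -> float:
    """Scalar VariFocal Loss.

    p: predicted IoU-aware classification score in (0, 1).
    q: target score (IoU with ground truth for positives, 0 for negatives).
    Positives are weighted by their target q; negatives are down-weighted
    by the focal factor alpha * p**gamma, which is how the loss reduces
    the influence of negative samples during training.
    """
    if q > 0:  # positive sample: binary cross-entropy weighted by q
        return -q * (q * math.log(p) + (1 - q) * math.log(1 - p))
    # negative sample: focal down-weighting of easy negatives
    return -alpha * (p ** gamma) * math.log(1 - p)

# An uncertain negative (p = 0.5) contributes more than a confident one (p = 0.1).
print(varifocal_loss(0.5, 0.0) > varifocal_loss(0.1, 0.0))  # True
```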
Funding: Supported by the Scientific Research Project of Tianjin Education Commission (Nos. 2020KJ091, 2018KJ184), the National Key Research and Development Program of China (No. 2020YFD0900600), the Earmarked Fund for CARS (No. CARS-47), and the Tianjin Mariculture Industry Technology System Innovation Team Construction Project (No. ITTMRS2021000).
Abstract: Sea cucumber detection is widely recognized as the key to automated aquaculture. The underwater light environment is complex, and sea cucumbers are easily obscured by mud, sand, reefs, and other underwater organisms. To date, research on sea cucumber detection has mostly concentrated on distinguishing prospective objects from the background; the key to proper distinction, however, is effective extraction of sea cucumber feature information. In this study, edge-enhanced scaling You Only Look Once v4 (ESYv4) was proposed for sea cucumber detection. To emphasize target features and reduce the effect that underwater variations in hue and brightness have on the misjudgment of sea cucumbers, a bidirectional cascade network (BDCN) was used to extract an overall edge greyscale map from the image, which was added to the original RGB image as the detection input. Meanwhile, the YOLOv4 backbone was scaled down, reducing the parameter count to 48% of the original. Validation on 783 images showed that the detection precision for positive sea cucumber samples reached 0.941. This improvement reflects that the algorithm is more effective at enhancing the target's edge feature information, and it thus contributes to automatic multi-object detection of underwater sea cucumbers.
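The edge-enhancement step above, extracting an overall edge greyscale map and adding it to the RGB input, can be sketched as follows. This is an illustrative stand-in only: a plain Sobel gradient magnitude replaces the BDCN edge detector the paper actually uses:

```python
def sobel_edge(gray):
    """Gradient magnitude of a 2-D grayscale image (list of lists),
    a simple stand-in for the BDCN edge map used in the paper."""
    h, w = len(gray), len(gray[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal Sobel kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical Sobel kernel
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * gray[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * gray[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: zeros on the left, ones on the right.
img = [[0, 0, 1, 1]] * 4
edges = sobel_edge(img)
print(edges[1][1], edges[1][2])  # 4.0 4.0 — interior columns respond to the step
```

In the paper's pipeline, the resulting edge map would be stacked with the three RGB channels to form the four-channel detection input.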
Abstract: To improve the speed and accuracy of traffic sign recognition, a recognition method based on the YOLOv4 (You Only Look Once version 4) deep learning framework is proposed and compared with the SSD (Single Shot MultiBox Detector) and YOLOv3 (You Only Look Once version 3) algorithms; the proposed model carries a significantly larger number of parameters. The YOLOv4 backbone feature-extraction network and multi-scale outputs are therefore further adjusted to yield a lightweight YOLOv4 algorithm. Simulation experiments show that the algorithm detects traffic signs quickly and effectively, with real-time performance and practical applicability.
Abstract: Existing pedestrian detection methods cannot simultaneously achieve high accuracy and high detection speed in complex environments. To address this, an efficient pedestrian detection method based on an improved YOLOv7 (You Only Look Once version 7) is proposed. First, a Slim-Neck is built from Ghost Shuffle Convolution (GSConv) and VoVGSCSP (VoVNet GSConv Cross Stage Partial) modules: GSConv uses a shuffle operation to blend the information generated by standard convolution into the output of separable convolution, enabling cross-channel information exchange, while VoVGSCSP applies a one-shot aggregation design to its cross-stage partial network, reducing computation and structural complexity while maintaining sufficient accuracy. Second, the Convolutional Block Attention Module (CBAM) is added at the YOLOv7 output stage, using channel and spatial attention to capture correlations between features, optimizing YOLOv7's feature representation and improving the method's accuracy and robustness. Experimental results show that on several pedestrian datasets, the improved YOLOv7 raises average precision (AP) by 1.63 to 3.51 percentage points and lowers the log-average miss rate (LAMR) by 0.54 to 3.97 percentage points compared with YOLOv5 and YOLOv7, and increases average detection speed by 10 FPS over YOLOv7. A Friedman test further confirms that the improved method is applicable to real data, achieving accurate, fast pedestrian detection in complex environments.
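The channel half of the CBAM module described above scores each channel from its average- and max-pooled responses and rescales the feature map accordingly. The sketch below is a simplification: the shared two-layer bottleneck MLP of real CBAM is reduced to an identity mapping for brevity:

```python
import math

def channel_attention(feat):
    """CBAM-style channel attention over a C x H x W feature map given
    as nested lists. The shared MLP is reduced to identity here; real
    CBAM learns a two-layer bottleneck MLP over the pooled vectors."""
    weights = []
    for ch in feat:
        flat = [v for row in ch for v in row]
        avg = sum(flat) / len(flat)                       # average pooling
        mx = max(flat)                                    # max pooling
        weights.append(1 / (1 + math.exp(-(avg + mx))))   # sigmoid gate
    # reweight each channel by its attention score
    return [[[v * w for v in row] for row in ch] for ch, w in zip(feat, weights)]

feat = [[[1.0, 1.0], [1.0, 1.0]],   # uniformly active channel
        [[0.0, 0.0], [0.0, 0.0]]]   # inactive channel
out = channel_attention(feat)
print(out[0][0][0] > out[1][0][0])  # True — the active channel keeps more weight
```

The spatial half of CBAM works analogously, pooling across channels instead of within them.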
Abstract: The underwater object detection algorithms currently used by marine-product harvesting robots have large parameter counts and are ill-suited to deployment on mobile devices. To address this, a lightweight marine-product detection algorithm, ES YOLOv7-tiny (EfficientNet-S YOLOv7-tiny), is proposed based on YOLOv7-tiny (You Only Look Once version 7-tiny). Building on YOLOv7-tiny, the backbone is first replaced with an improved EfficientNet (EfficientNet-S), and the 3×3 convolutions in the neck are replaced with lightweight convolutions to reduce the parameter count. Second, the k-means++ algorithm is used to cluster anchor box sizes, improving inference speed. Finally, knowledge distillation is applied to further improve accuracy. On the RUIE (Real-world Underwater Image Enhancement) dataset, the proposed algorithm achieves a mean average precision (mAP) of 73.7% and a detection speed of 123 frames/s with 4.45×10^6 parameters; compared with the original YOLOv7-tiny, mAP improves by 1.2 percentage points, detection speed increases by 25 frames/s, and the parameter count drops by 1.56×10^6. The experimental results show that the proposed algorithm improves accuracy while reducing parameters and speeding up detection, demonstrating its effectiveness.
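The anchor clustering step above is commonly implemented over box widths and heights with 1 − IoU as the distance metric. The sketch below shows the k-means++ seeding stage under that common recipe; the abstract does not specify the paper's exact distance or data, so the boxes here are made up for illustration:

```python
import random

def iou_wh(a, b):
    """IoU of two boxes given as (w, h), both anchored at the origin."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    return inter / (a[0] * a[1] + b[0] * b[1] - inter)

def kmeanspp_anchors(boxes, k, seed=0):
    """k-means++ seeding over (w, h) boxes with 1 - IoU as the distance,
    a common recipe for choosing YOLO anchor sizes. Returns k initial
    centres; a full k-means refinement would follow."""
    rng = random.Random(seed)
    centers = [rng.choice(boxes)]
    while len(centers) < k:
        # distance of each box to its nearest chosen centre
        d = [min(1 - iou_wh(b, c) for c in centers) for b in boxes]
        # sample the next centre with probability proportional to distance
        r, acc = rng.uniform(0, sum(d)), 0.0
        for box, dist in zip(boxes, d):
            acc += dist
            if acc >= r:
                centers.append(box)
                break
    return centers

boxes = [(10, 12), (11, 13), (50, 60), (48, 55), (100, 90)]
print(kmeanspp_anchors(boxes, 2))
```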
Abstract: Traditional methods for recognizing and evaluating the training movements of ski athletes suffer from human subjectivity and low accuracy. To address this, a movement-analysis algorithm based on improved OpenPose and YOLOv5 (You Only Look Once version 5) is proposed. CSP-Darknet53 (Cross Stage Partial Network 53) is used as the external network of OpenPose to reduce the dimensionality of the input image and extract feature maps. The optimized YOLOv5 algorithm is fused in to extract human skeletal keypoints, which form a skeleton that is compared against the standard movement and scored according to joint-angle information; a loss function is added to the model to quantify the error between the detected movement and the standard one. The model can monitor athletes' movements in real time and perform a preliminary movement evaluation. Experimental results show a detection and recognition accuracy of 95%, which meets the needs of daily ski training.
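The angle-based scoring described above can be sketched from three detected keypoints. The linear scoring function and its 30° tolerance below are assumptions for illustration; the abstract states only that scores are derived from angle information, not the exact formula:

```python
import math

def joint_angle(a, b, c):
    """Angle at keypoint b (degrees) formed by keypoints a-b-c,
    e.g. hip-knee-ankle when checking a skiing squat."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def angle_score(measured, standard, tolerance=30.0):
    """Linear score in [0, 1]: full marks when the measured joint angle
    matches the standard pose, falling to 0 at `tolerance` degrees off.
    The linear form and tolerance are illustrative assumptions."""
    return max(0.0, 1.0 - abs(measured - standard) / tolerance)

# Right angle at the knee: hip directly above, ankle out to the side.
knee = joint_angle((0, 0), (0, 1), (1, 1))
print(round(knee))              # 90
print(angle_score(knee, 90.0))  # 1.0
```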
Abstract: Taking the Internet Protocol version 6 (IPv6) upgrade of the Linxia Modern Vocational College campus network as a case study, this article describes the current state of the Internet Protocol version 4 (IPv4) campus network and its problems, compares the advantages and disadvantages of IPv4 and IPv6, and, drawing on the dual-stack and tunneling techniques used for IPv6 migration, explores a phased upgrade plan for the campus network, with an emphasis on the technical implementation of an IPv4/IPv6 dual-stack campus network.
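One concrete form of the tunneling technique mentioned above is 6to4 (RFC 3056), in which a site derives its IPv6 /48 prefix directly from its public IPv4 address. The sketch below uses Python's standard ipaddress module and a documentation IPv4 address, not any address from the article:

```python
import ipaddress

def six_to_four_prefix(v4: str) -> ipaddress.IPv6Network:
    """Derive the 6to4 (RFC 3056) /48 prefix a site obtains from its
    public IPv4 address: 2002::/16 with the 32-bit IPv4 address
    embedded in bits 16..47 of the IPv6 prefix."""
    prefix_int = (0x2002 << 112) | (int(ipaddress.IPv4Address(v4)) << 80)
    return ipaddress.IPv6Network((prefix_int, 48))

print(six_to_four_prefix("192.0.2.1"))  # 2002:c000:201::/48
```

Traffic to such a prefix is encapsulated in IPv4 toward the embedded address, which is what lets IPv6 islands communicate across an IPv4-only core during a phased migration.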
Abstract: Drone or unmanned aerial vehicle (UAV) technology has undergone significant changes. It allows a UAV to carry out a wide range of tasks with an increasing level of sophistication, since drones can cover a large area with cameras. Meanwhile, the growing number of computer vision applications that use deep learning provides unique insight into such applications. The primary target in UAV-based detection applications is humans, yet aerial recordings are not included in the massive datasets used to train object detectors, which makes it necessary to gather model data from such platforms. You Only Look Once (YOLO) version 4, RetinaNet, Faster region-based convolutional neural network (R-CNN), and Cascade R-CNN are well-known detectors that have been studied previously on a variety of datasets to replicate rescue scenes. Here, we used the search and rescue (SAR) dataset to train the You Only Look Once version 5 (YOLOv5) algorithm and validate its speed, accuracy, and low false-detection rate. Compared with YOLOv4 and R-CNN, the highest mean average accuracy, 96.9%, is obtained by YOLOv5. For comparison, experimental findings using the SAR and the Human Rescue Imaging Database on Land (HERIDAL) datasets are presented. The results show that the YOLOv5-based approach is the most successful human detection model for SAR missions.
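Detector comparisons like the one above rank methods by averaged precision over ranked detections. As a hedged sketch of the underlying metric, the snippet below computes the classic information-retrieval form of average precision for one class; the interpolated variants used by VOC- and COCO-style benchmarks differ in detail, and the detection list here is invented for illustration:

```python
def average_precision(scores_and_hits, num_gt):
    """Average precision for one class.

    scores_and_hits: list of (confidence, is_true_positive) per detection.
    num_gt: number of ground-truth objects.
    Detections are ranked by confidence; precision is accumulated at
    every rank where a true positive occurs, then normalized by num_gt.
    """
    ranked = sorted(scores_and_hits, key=lambda t: -t[0])
    tp, ap = 0, 0.0
    for rank, (_, hit) in enumerate(ranked, start=1):
        if hit:
            tp += 1
            ap += tp / rank  # precision at this recall step
    return ap / num_gt

# Three detections, two ground truths: true positives at ranks 1 and 3.
dets = [(0.9, True), (0.8, False), (0.7, True)]
print(average_precision(dets, 2))  # ≈ 0.833
```

Mean average precision (mAP) is then simply this value averaged across classes.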