Article Abstract
Zhang Liguo, Hu Lin. Head pose estimation method based on improved HopeNet [J]. High Technology Letters (Chinese edition), 2024, 34(5): 486-495
Head pose estimation method based on improved HopeNet
DOI: 10.3772/j.issn.1002-0470.2024.05.005
Keywords: head pose estimation; HopeNet; feature fusion; squeeze-and-excitation; adaptive learning
Funding:
Authors and affiliation
Zhang Liguo (Key Laboratory of Measurement Technology and Instrumentation, Yanshan University, Qinhuangdao 066004)
Hu Lin
Abstract:
      To address the poor accuracy of head pose estimation algorithms that require no prior knowledge on images with complex backgrounds and at multiple scales, a head pose estimation method based on an improved HopeNet is proposed. First, a feature fusion structure is added to the backbone network so that the model can fully exploit both the deep-layer and shallow-layer feature information of the network, improving the model's feature resolving power. Then, a squeeze-and-excitation module is added to the residual blocks of the backbone, enabling the network to adaptively learn weights reflecting the importance of different feature layers, so that the model focuses more on the target information. Experimental results show that, compared with HopeNet, the accuracy of the proposed method on the AFLW2000 dataset improves by 31.15%, with the mean error reduced to 4.20°; the method is also robust on images with complex backgrounds.
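The abstract names two additions to the HopeNet backbone: a feature fusion structure combining deep and shallow feature maps, and a squeeze-and-excitation (SE) module inside the residual blocks. The following is a minimal NumPy sketch of these two operations, not the paper's actual implementation; the channel counts, the reduction ratio, and the use of nearest-neighbour upsampling plus channel concatenation for fusion are illustrative assumptions.

```python
import numpy as np

def fuse_features(shallow, deep):
    """Fuse a shallow feature map (Cs, H, W) with a deeper one (Cd, H/2, W/2)
    by nearest-neighbour upsampling the deep map to (Cd, H, W) and
    concatenating along the channel axis. (Illustrative fusion scheme.)"""
    up = deep.repeat(2, axis=1).repeat(2, axis=2)   # (Cd, H, W)
    return np.concatenate([shallow, up], axis=0)    # (Cs + Cd, H, W)

def squeeze_excitation(feature_map, w1, b1, w2, b2):
    """Squeeze-and-excitation over a feature map of shape (C, H, W).

    Squeeze: global average pooling gives one descriptor per channel.
    Excitation: a bottleneck FC -> ReLU -> FC -> sigmoid produces a weight
    in (0, 1) per channel, which rescales the original feature map so the
    network can emphasise informative channels."""
    z = feature_map.mean(axis=(1, 2))               # squeeze: (C,)
    s = np.maximum(0.0, w1 @ z + b1)                # bottleneck: (C/r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s + b2)))        # channel weights: (C,)
    return feature_map * s[:, None, None]           # scale each channel
```

A quick shape check with random weights (channel count 8, reduction ratio 2) confirms that fusion stacks channels at the shallow resolution and that SE leaves the tensor shape unchanged while rescaling each channel by a single learned factor.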