Facial Expression Editing Technology with Fused Feature Coding

Abstract: To address the tendency of current continuous facial expression generation models to produce artifacts in expression-dense facial regions, and their weak control over expressions, this work studies and improves the GANimation model to increase the accuracy of control over facial muscle Action Units (AUs). A multi-scale feature fusion (MFF) module is introduced between the encoding and decoding feature layers of the generator, and the resulting fused features are passed to the image decoder via long skip connections. An additional deconvolution (transposed convolution) layer is added to the decoding part of the generator, which makes the insertion of the MFF module more efficient and natural. In comparative experiments against the original network on a self-built dataset, the accuracy of expression synthesis and the quality of the generated images improved by 1.28 and 2.52 respectively, verifying that the improved algorithm strengthens facial expression editing while producing images free of blur and artifacts.
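The core idea behind the MFF module, as described in the abstract, is to combine encoder features from several spatial scales and feed the fused result to the decoder over long skip connections. The following is a minimal NumPy sketch of that general fusion step only (upsample coarser feature maps to a common resolution, then concatenate along the channel axis); the function names, nearest-neighbor upsampling, and feature shapes are illustrative assumptions, not the paper's actual MFF implementation.

```python
import numpy as np

def upsample_nearest(feat, factor):
    """Nearest-neighbor upsampling of a (C, H, W) feature map by an integer factor."""
    return feat.repeat(factor, axis=1).repeat(factor, axis=2)

def mff_fuse(encoder_feats):
    """Fuse encoder feature maps of different scales (illustrative sketch).

    encoder_feats: list of arrays shaped (C_i, H_i, W_i), where each H_i
    evenly divides the largest H. Each map is upsampled to the finest
    resolution and all maps are concatenated along the channel axis,
    ready to be passed to the decoder via a long skip connection.
    """
    target_h = max(f.shape[1] for f in encoder_feats)
    upsampled = [upsample_nearest(f, target_h // f.shape[1]) for f in encoder_feats]
    return np.concatenate(upsampled, axis=0)

# Example: three encoder scales, fused to the finest 32x32 resolution.
fused = mff_fuse([
    np.zeros((64, 32, 32)),   # fine-scale features
    np.ones((128, 16, 16)),   # mid-scale features
    np.zeros((256, 8, 8)),    # coarse-scale features
])
print(fused.shape)  # (448, 32, 32): channels 64 + 128 + 256
```

In the actual model, a fusion block of this kind would typically also apply learned convolutions to the concatenated tensor before decoding; the sketch shows only the scale-alignment and concatenation that make the long skip connection possible.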

     

