Abstract:
To address two weaknesses of current continuous facial expression generation models, namely artifacts in expression-intensive regions and weak expression control, we improve the GANimation model to increase the accuracy of Action Unit (AU) control over the facial expression muscles. A multi-dimensional feature fusion (MFF) module is introduced between the encoder and decoder feature layers of the generator, and the resulting fused features are passed to the image decoder through long skip connections. A transposed-convolution layer is added to the decoder of the generator so that the MFF module can be integrated more efficiently and naturally. In comparative experiments against the original network on a self-built dataset, the improved model raises the expression synthesis accuracy by 1.28 and the generated image quality by 2.52, verifying that the improved algorithm performs better at facial expression editing, with less blurring and fewer artifacts.
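The architectural changes described above can be illustrated with a minimal PyTorch sketch. This assumes a GANimation-style encoder-decoder generator; the class names (MFF, Decoder), channel counts, and the fusion strategy (parallel convolutions with different kernel sizes, concatenated and projected) are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class MFF(nn.Module):
    """Multi-dimensional feature fusion: combine an encoder feature map with
    the decoder feature map of matching spatial size (assumed design)."""
    def __init__(self, enc_channels, dec_channels, out_channels):
        super().__init__()
        # Parallel branches with different receptive fields capture
        # features at multiple scales (hypothetical fusion scheme).
        self.branch3 = nn.Conv2d(enc_channels + dec_channels, out_channels, 3, padding=1)
        self.branch5 = nn.Conv2d(enc_channels + dec_channels, out_channels, 5, padding=2)
        self.fuse = nn.Conv2d(2 * out_channels, out_channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, enc_feat, dec_feat):
        # Long skip connection: encoder features are concatenated with
        # decoder features rather than discarded after encoding.
        x = torch.cat([enc_feat, dec_feat], dim=1)
        y = torch.cat([self.branch3(x), self.branch5(x)], dim=1)
        return self.act(self.fuse(y))

class Decoder(nn.Module):
    """Decoder with one extra transposed-convolution layer so the MFF
    output has an additional upsampling stage to feed into."""
    def __init__(self, channels=256):
        super().__init__()
        self.mff = MFF(channels, channels, channels)
        self.up1 = nn.ConvTranspose2d(channels, channels // 2, 4, stride=2, padding=1)
        # Added transposed-convolution layer, per the described modification.
        self.up2 = nn.ConvTranspose2d(channels // 2, 64, 4, stride=2, padding=1)
        self.to_rgb = nn.Conv2d(64, 3, 7, padding=3)

    def forward(self, enc_feat, bottleneck):
        fused = self.mff(enc_feat, bottleneck)  # fused features drive decoding
        h = torch.relu(self.up1(fused))
        h = torch.relu(self.up2(h))
        return torch.tanh(self.to_rgb(h))
```

Under these assumptions, the extra transposed convolution gives the fused features their own upsampling path, which is one plausible reading of why the added layer makes the MFF integration "more efficient and reasonable."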