Learning to Write Multi-Stylized Chinese Characters by Generative Adversarial Networks

Abstract: With the development of Generative Adversarial Networks (GANs), more and more research has been conducted on Chinese font transformation, and researchers are now able to generate high-quality images of Chinese characters. These font transformation models use a GAN to transform a source font into a target font. However, current methods have two limitations: 1) the generated images are often blurry, and 2) a model can learn and produce only one target font at a time. To address these problems, we develop a new model for Chinese font transformation. First, font information is attached to the input images to tell the generator which target font to produce. The generator then extracts and learns feature maps through convolutional networks and generates photo-realistic images using transposed convolutional networks. Ground-truth images are used as supervisory information to ensure that each generated image preserves both the character identity and the target font. The model needs to be trained only once, yet it can transform one font into multiple fonts and even produce new fonts. Extensive experiments on seven Chinese font datasets show that the proposed method outperforms several other methods in Chinese font transformation.
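The abstract describes the generator only at a high level. Below is a minimal, illustrative PyTorch sketch of a font-conditional encoder-decoder generator of the kind described: a one-hot target-font label is attached to the source image as extra channels, a convolutional encoder extracts feature maps, and a transposed-convolutional decoder produces the output character. The class name FontConditionalGenerator, layer widths, kernel sizes, and the image size are assumptions made for illustration, not the paper's actual configuration.

```python
import torch
import torch.nn as nn


class FontConditionalGenerator(nn.Module):
    """Encoder-decoder generator conditioned on a target-font label (illustrative)."""

    def __init__(self, num_fonts=7, img_channels=1):
        super().__init__()
        self.num_fonts = num_fonts
        # Encoder: strided convolutions over the image plus the font-label planes.
        self.encoder = nn.Sequential(
            nn.Conv2d(img_channels + num_fonts, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 256, 4, stride=2, padding=1),
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # Decoder: transposed convolutions map the feature maps back to an image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, img_channels, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, source_img, font_id):
        # "Attach font information to the image": expand a one-hot target-font
        # label to the spatial size of the input and concatenate it as channels.
        b, _, h, w = source_img.shape
        onehot = torch.zeros(b, self.num_fonts, device=source_img.device)
        onehot.scatter_(1, font_id.view(-1, 1), 1.0)
        label_planes = onehot.view(b, self.num_fonts, 1, 1).expand(-1, -1, h, w)
        x = torch.cat([source_img, label_planes], dim=1)
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    # Translate a batch of 64x64 source-font characters into target font index 3.
    gen = FontConditionalGenerator(num_fonts=7)
    src = torch.randn(8, 1, 64, 64)
    target_font = torch.full((8,), 3, dtype=torch.long)
    out = gen(src, target_font)
    print(out.shape)  # torch.Size([8, 1, 64, 64])
```

In training, such a generator would be paired with a discriminator, and the ground-truth images of the target font would provide the supervisory signal mentioned in the abstract; in conditional GANs this is typically realized as a pixel-wise reconstruction loss alongside the adversarial loss, though the abstract does not specify the exact losses used.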
