Abstract:
Existing acupoint detection (AD) approaches suffer from dependence on extra equipment, shallow feature representations, and poor accuracy. In this work, by analyzing the nature of the task, AD is formulated as keypoint detection on visual images. A novel paradigm, facial acupoint detection by reconstruction (FADbR), is designed for the facial AD task. First, an adversarial autoencoder serves as the backbone network and is trained under a self-supervised learning mechanism: an image-to-image reconstruction procedure strengthens feature representation, with the network capturing latent representations and abstract knowledge of the human face. Subsequently, the FADbR framework performs the AD task in a supervised manner through interleaved layers that output a heatmap for each acupoint. Owing to the reconstruction procedure, a fine-grained model is obtained that improves AD performance through the learned facial representations. A new dataset, FAcupoint, is built from a public human face dataset to validate the proposed approach. Experimental results on this dataset demonstrate that the proposed FADbR framework extracts high-level feature representations that improve AD performance. Most importantly, FADbR achieves favorable performance with only a small number of training samples, further validating the reconstruction paradigm proposed in this work.