An online monitoring method for current transformers based on a gated attention network

An online monitoring method for current transformers based on a gated transformer model

  • Abstract: As the demands on grid services continue to rise, higher requirements are being placed on the online monitoring and state evaluation of power instrument transformers, which are key equipment in the power grid. Traditional electromagnetic current transformers must be calibrated offline during power outages, which makes it difficult to reflect their actual operating state, increases operation and maintenance complexity and cost, and affects the accuracy of energy metering and the stability of the grid. To address the difficulty that online error monitoring of current transformers lacks a standard transformer as a reference, an online monitoring method based on a gated-attention Transformer model (Gatedformer) is proposed: by learning the features of multi-channel current data, it accurately predicts the future standard value of the current transformer and thereby computes its error. Through a dimension-transposition operation, the method lets the attention mechanism focus on correlations among temporal features and strengthens the decoupling of features across the multiple channels; the gated attention mechanism further improves the capture of time-series dependencies and significantly boosts long-sequence forecasting performance. Experimental results show that the model achieves an average prediction error of 0.090% on a three-channel current dataset, providing strong support for the online monitoring of current transformers.
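
As a concrete illustration of the final step above, the sketch below shows how a percentage ratio error could be computed once the model has predicted a reference ("standard") current for the monitored transformer. This is a minimal Python sketch assuming the conventional ratio-error formula; the paper's exact error definition, the variable names, and the example values are assumptions, not taken from the source.

```python
import numpy as np

def ratio_error_percent(i_measured: np.ndarray, i_reference: np.ndarray) -> np.ndarray:
    """Percentage ratio error of the monitored CT, using the model-predicted
    standard current as the reference in place of a physical standard CT."""
    return (i_measured - i_reference) / i_reference * 100.0

# Illustrative values only: secondary currents already referred to the primary side.
i_measured  = np.array([100.12, 99.95, 100.30])   # monitored CT output (A)
i_predicted = np.array([100.05, 100.00, 100.20])  # model-predicted reference (A)
print(ratio_error_percent(i_measured, i_predicted))  # per-sample error in %
```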

     

    Abstract: With the development of big data, the Internet of Things, cloud computing, and artificial intelligence, this paper proposes an online monitoring and state evaluation method that draws on these technologies to improve the accuracy, reliability, and economy of power systems. Traditional electromagnetic current transformers require offline calibration during power outages, which makes it difficult to reflect their actual operating status, increases operational complexity and cost, and degrades the accuracy of energy metering and the stability of the power grid. To address the lack of a standard transformer as a reference for online error monitoring of current transformers, this paper proposes an online monitoring method based on the Gated Attention Transformer model (Gatedformer). The method takes multi-channel current data as input, uses the Gatedformer model to learn the data features, and accurately predicts future standard values of the current transformer. Specifically, the method embeds the time points of each variable as a token so that the attention mechanism captures multivariable correlations; it uses a gated attention network to capture the long-term dependencies of the time series and applies a feedforward network to learn nonlinear representations. Experimental results show that, when predicting the next day's window of standard current values, the model achieves a prediction error of only 0.090% in online monitoring of current transformers, outperforming existing models.
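
The sketch below is one plausible reading of the pipeline described above — a dimension-transposed (variate-as-token) embedding, gated self-attention, and a feedforward projection head — written in PyTorch. The layer sizes, the sigmoid gating form, and the class names (GatedAttentionBlock, GatedformerSketch) are illustrative assumptions rather than the authors' implementation.

```python
# Minimal PyTorch sketch of a Gatedformer-style forecaster; hyperparameters and
# the exact gating formulation are assumptions, not the paper's implementation.
import torch
import torch.nn as nn


class GatedAttentionBlock(nn.Module):
    """Self-attention whose output is modulated by a learned sigmoid gate,
    followed by a feedforward network for nonlinear representations."""

    def __init__(self, d_model: int, n_heads: int = 4, d_ff: int = 256):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(d_model, d_model)   # assumed gating projection
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn_out, _ = self.attn(x, x, x)                 # attention across variate tokens
        gated = torch.sigmoid(self.gate(x)) * attn_out   # gate damps irrelevant correlations
        x = self.norm1(x + gated)
        return self.norm2(x + self.ffn(x))


class GatedformerSketch(nn.Module):
    """Transpose (batch, time, variates) -> (batch, variates, time), embed each
    channel's full history as one token, mix tokens with gated attention, and
    project each token to the prediction horizon."""

    def __init__(self, lookback: int, horizon: int, d_model: int = 128, n_blocks: int = 2):
        super().__init__()
        self.embed = nn.Linear(lookback, d_model)         # per-variate series embedding
        self.blocks = nn.ModuleList([GatedAttentionBlock(d_model) for _ in range(n_blocks)])
        self.head = nn.Linear(d_model, horizon)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, lookback, n_variates), e.g. three current channels
        tokens = self.embed(x.transpose(1, 2))            # (batch, n_variates, d_model)
        for blk in self.blocks:
            tokens = blk(tokens)
        return self.head(tokens).transpose(1, 2)          # (batch, horizon, n_variates)


if __name__ == "__main__":
    model = GatedformerSketch(lookback=96, horizon=96)
    history = torch.randn(8, 96, 3)                       # 3-channel current history
    print(model(history).shape)                           # torch.Size([8, 96, 3])
```

Forecasting the next window with such a model and comparing the prediction with the monitored transformer's actual output would yield the error estimate discussed above.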

     
