Deep learning monitoring of the Z24 benchmark bridge
Milani A.
2023-01-01
Abstract
In the context of Structural Health Monitoring (SHM), the need to develop advanced strategies for real-time bridge assessment has recently attracted significant attention worldwide. The main reason lies in the urgent need to manage the increasing number of ageing infrastructures, which are inevitably exposed to progressive degradation phenomena. In order to avoid catastrophic failures and to preserve the key role of bridges in modern transport networks, vibration-based SHM systems have been widely explored for structural damage identification. Following the recent interest in Artificial Intelligence (AI), this paper proposes an unsupervised machine learning-based damage detection technique, exploiting a type of artificial neural network called an autoencoder. The method requires only raw acceleration sequences collected from a healthy structure to train the autoencoder. Unknown testing data are then fed into the trained model to assess its reconstruction capability and perform anomaly detection. To quantify the discrepancies between the original and the reconstructed acceleration sequences, three indexes of reconstruction loss are selected as damage-sensitive features. The proposed damage detector fixes a specific threshold for each feature, based on the analysis of time series recorded in healthy conditions, to obtain a classification criterion. Unknown sequences whose features exceed the previously determined thresholds are then classified as damaged. The methodology presented in this work is applied to the Z24 bridge. The obtained results demonstrate that the use of the autoencoder may be a valid alternative to other approaches, such as Operational Modal Analysis (OMA)-based methods, for bridge health assessment and local damage detection purposes.
Moreover, being computationally efficient and working at the level of the single sensor, the method is especially suitable for real-time and on-site implementation.
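The abstract's classification criterion — fix a threshold for each reconstruction-loss feature from healthy-condition data, then flag test sequences whose features exceed it — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the autoencoder itself is omitted, a single mean-squared-error feature stands in for the paper's three (unspecified) loss indexes, and the percentile-based threshold rule and all function names are hypothetical assumptions.

```python
import numpy as np

def mse_feature(original, reconstructed):
    # Hypothetical damage-sensitive feature: mean squared reconstruction error
    # between an acceleration sequence and its autoencoder reconstruction.
    return float(np.mean((np.asarray(original) - np.asarray(reconstructed)) ** 2))

def fit_threshold(healthy_losses, percentile=99.0):
    # Fix the threshold from healthy-condition losses only (unsupervised):
    # here, an assumed high percentile of the healthy loss distribution.
    return float(np.percentile(healthy_losses, percentile))

def classify(loss, threshold):
    # Sequences whose feature exceeds the threshold are classified as damaged.
    return "damaged" if loss > threshold else "healthy"

# Simulated reconstruction losses on healthy training data (illustrative only).
rng = np.random.default_rng(0)
healthy_losses = rng.normal(loc=1.0, scale=0.1, size=500)

tau = fit_threshold(healthy_losses)
print(classify(0.95, tau))  # loss within the healthy range
print(classify(2.50, tau))  # loss far above the healthy range
```

Because the threshold is derived entirely from healthy-condition data, no labelled damage examples are needed, which is what makes the scheme unsupervised.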


