Electrocardiogram (ECG) signals are essential for diagnosing heart diseases. Their acquisition consists of applying from 4 to 10 electrodes to the body, and long recordings may span 24 hours or more, producing a large volume of data to be stored on portable devices. Moreover, technological progress keeps improving acquisition precision (e.g. sampling rate, resolution), further increasing the amount of digital ECG data. We tackle the problem of ECG signal compression using sparse recovery techniques.

The model

The overall ECG compression method we propose [1] is sketched in the block diagram of Fig. 1. It consists of four stages, described below:
  1. signal preprocessing: standard filtering for baseline wander removal, R-peak detection, and normalization based on zero-padding of centered RR segments;
  2. dictionary construction over a natural basis extracted from the initial transient of the normalized record;
  3. online sparse decomposition through the sparsity solver k-LiMapS combined with a Least-Squares Projection (LSP), falling back to the Discrete Wavelet Transform (DWT) for segments that are either too long or non-sparsifiable;
  4. quantization and arithmetic coding of the coefficients produced both by the sparsity process and (possibly) by the DWT.
ECG framework diagram
Fig. 1: Diagram of the ECG signal compression framework based on sparsity.
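The first two stages above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes RR segments are already extracted as NumPy arrays, and the function names (`center_pad`, `build_dictionary`) and the fixed target length are illustrative.

```python
import numpy as np

def center_pad(segment, target_len):
    """Zero-pad an RR segment symmetrically to a fixed length
    (sketch of the centering/zero-padding normalization in stage 1)."""
    pad = target_len - len(segment)
    if pad < 0:
        raise ValueError("segment longer than target length")
    left = pad // 2
    return np.pad(segment, (left, pad - left))

def build_dictionary(segments, target_len):
    """Stack normalized segments as columns: a 'natural basis'
    dictionary drawn from the record's initial transient (stage 2)."""
    return np.column_stack([center_pad(s, target_len) for s in segments])
```

With this layout each column of the dictionary is one aligned heartbeat, so later beats of the same patient can be approximated by sparse combinations of a few columns.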

Comparison with earlier work

An early version of the ECG compression method, framed in the sparse representation field, was presented in [2]. In that work, the problem of succinctly representing heart beats is recast as a regularization problem with approximate constraints. These constraints are expressed by a dictionary built on the normalized initial transient of the original ECG record. In particular, a dictionary is built for each patient by collecting and aligning the R-R intervals of the first part of the ECG signal (about 5-10 min), while the remaining intervals are compressed. The problem is then tackled by the k-LiMapS sparsity solver, essentially an iterative scheme that seeks the sparsest solution of the dictionary-based linear system.

The present contribution introduces major improvements in the core of the k-LiMapS algorithm in two directions. On the one hand, the reconstruction quality requirements (PRDN, i.e. Normalized Percentage Root-mean-square Difference) are built into the sparsity recovery scheme, resulting in a PRDN-guaranteed method. On the other hand, we add a final least-squares projection step that yields the optimal point within the subspace spanned by the atoms selected by k-LiMapS. As a minor improvement, we introduce Tikhonov regularization into the sparsity solver to make it more robust against the zero-padding normalization. Moreover, we handle the rare cases of non-sparsifiable ECG segments, i.e. segments requiring too many dictionary atoms for their reconstruction, through a backup procedure based on standard wavelet transforms. Thanks to these improvements, the final model outperforms the early version [2].
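The two quality-related ingredients above, the least-squares projection onto the selected atoms and the PRDN measure, can be written compactly. This is a sketch only: k-LiMapS itself is not reimplemented here (the support set is taken as given), and the function names are illustrative. PRDN is assumed in its usual form, 100 · ‖x − x̂‖ / ‖x − mean(x)‖.

```python
import numpy as np

def lsp_reconstruct(D, support, x):
    """Least-Squares Projection: given the atom indices selected by a
    sparsity solver, return the optimal reconstruction of x within the
    subspace spanned by those atoms."""
    Ds = D[:, support]
    coeffs, *_ = np.linalg.lstsq(Ds, x, rcond=None)
    return Ds @ coeffs

def prdn(x, x_hat):
    """Normalized Percentage Root-mean-square Difference between the
    original segment x and its reconstruction x_hat."""
    return 100.0 * np.linalg.norm(x - x_hat) / np.linalg.norm(x - x.mean())
```

If the signal actually lies in the span of the selected atoms, the projection recovers it exactly and the PRDN drops to (numerically) zero; otherwise the PRDN quantifies the residual error that the accuracy-driven scheme keeps below a prescribed threshold.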


[1] G. Grossi, R. Lanzarotti, J. Lin. High-rate compression of ECG signals by an accuracy-driven sparsity model relying on natural basis. Digital Signal Processing 45, 96–106, 2015.
[2] A. Adamo, G. Grossi, R. Lanzarotti, J. Lin. ECG compression retaining the best natural basis k-coefficients via sparse decomposition. Biomedical Signal Processing and Control 15, 11–17, 2015.