"Fourier's theorem tells us that every curve, no matter what its nature may be, or in what way it was originally obtained, can be exactly reproduced by superposing a sufficient number of simple harmonic curves - in brief, every curve can be built up by piling up waves."
We often visualize a time series graphically, as a set of measurements plotted two-dimensionally: time is usually on the horizontal axis, and the magnitude of the quantity we are measuring is on the vertical axis. Hence, we refer to the "time domain" as a point of view in which time is the independent variable.
If we wish, we can represent any periodic function of time as a sum of sines and cosines, which is called the Fourier series expansion of the function. The field of harmonic analysis deals with methods of computing Fourier series.
A Fourier series has a natural graphical representation, as a spectrum. In a Fourier spectrum, we plot the frequency of each sine component, usually on the horizontal axis, and the corresponding amplitudes on the vertical axis. So a Fourier spectrum is a point of view in which frequency, rather than time, is the independent variable: it reveals the "frequency domain" to us.
It's important to note that transforming from the time domain to the frequency domain and back again, if done properly, neither creates nor destroys information. What is in the time series is in the spectrum, and vice versa -- the transform simply reorganizes what was there in the first place, and the only reason we really care about the frequency domain is that we have found that some of that information is more readily recognizable when we see it in a spectrum.
Jean Baptiste Joseph Fourier accompanied Napoleon to Egypt, and was governor of Lower Egypt in the years just before his formulation of harmonic analysis, in 1807, which dealt with periodic functions of infinite extent. Only periodic time series have Fourier series representations. The HR time series we usually study are many things, but never periodic.
Of course, there is a way to apply Fourier techniques to non-periodic time series. Since our series are of finite length, we can, if we wish, pretend that they repeat themselves, endlessly. Perhaps surprisingly, this works well enough for many applications, and it even has a name: Dirichlet windowing.
(I learned while compiling this talk that Gustav Lejeune Dirichlet, though German, came from a Belgian family -- hence his French surname -- and that he was a brother-in-law of Felix Mendelssohn. His first paper was a partial proof of Fermat's last theorem for the case n = 5. Dirichlet also deserves credit for the first rigorous proof of Fourier's theorem.)
If, for example, our time series measures a quantity that grows significantly during the period of observation, the periodic series that we get by stitching copies of it together end-to-end will include a big discontinuity at each seam.
When we look at a Fourier spectrum made from such a series, the most obvious feature is a huge component at the frequency that corresponds to the length of the original time series -- in other words, an artifact of our stitching process.
The basic idea of windowing is to reduce these "end effects" by forcing the joins to match. We begin by defining a window function, and there are many from which to choose, with names such as Bartlett, Parzen, Welch, Blackman, Hann, Hamming, and more. A common feature of all these functions is that they begin and end at zero, and reach a value of one in the middle. We stretch out the window function so that its length matches our time series, and then we multiply the time series by the stretched-out window. In the middle of the window, the product matches our original series, and at the beginning and end, the product is zero -- this is how windows force the endpoints to match.
The differences among the window functions are in how much of the central region has a value of one (in other words, how much of the original time series will be left unaltered by the window function) and in how the transitions between one and zero are managed at each end. There is much less significance in these details than is sometimes assigned to them. What's important is that any reasonable windowing function is at least a little better than not using a windowing function at all, and that you should be careful about comparing spectra made using different window functions, particularly when looking at their lowest-amplitude features.
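As a concrete sketch of the tapering step (using NumPy's built-in Hann window on a synthetic series; the names here are mine):

```python
import numpy as np

n = 256
t = np.arange(n) / 4.0                              # e.g. 4 samples per second
series = 60.0 + 5.0 * np.sin(2 * np.pi * 0.1 * t)   # a synthetic HR-like series

window = np.hanning(n)          # begins and ends at zero, reaches one in the middle
windowed = series * window      # taper the series

# The product is zero at both endpoints, so stitched copies now join smoothly.
```

Because the window is exactly zero at its first and last samples, the periodic extension of the windowed series has no discontinuity at the seams.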
If the original time series has a non-zero mean (as, for example, any heart rate time series does), the windowing function introduces a significant distortion by artificially lowering the values near the beginning and end of the series to zero. If uncorrected, this results in a large artifact at the extreme low end of the spectrum. It is easiest and best to correct for this problem before windowing, by subtracting the mean value of the time series from each of the samples, a process that ensures that the mean of the shifted series is zero; hence, this process is inelegantly called "zero-meaning".
A similar problem arises if the input series has a persistent linear trend from beginning to end, and a similar solution is used. We can fit a (least-squares fit) line to the series and subtract it from the samples, in order to detrend the input series.
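Both corrections are simple in practice; here is a sketch using NumPy (the synthetic series and variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(300) * 0.5                                # sample times, seconds
hr = 70.0 + 0.01 * t + rng.normal(0.0, 1.0, t.size)     # mean ~70, slight upward trend

# "Zero-meaning": subtract the mean, so the shifted series has zero mean.
zero_meaned = hr - hr.mean()

# Detrending: fit a least-squares line and subtract it from the samples.
slope, intercept = np.polyfit(t, hr, 1)
detrended = hr - (slope * t + intercept)
```

Note that subtracting a least-squares line also removes the mean, so the detrended series needs no separate zero-meaning step.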
The presence of a significant trend in a time series should not be ignored, however. A fundamental assumption of harmonic analysis is stationarity of the input time series. Although we can go through the computations needed to produce a spectrum even when the input is non-stationary, the output of the process is not strictly a spectrum (a representation of signal content as a function of frequency) unless the input is stationary. Generally, a time series can be considered stationary if its first- and second-order statistics (means and sample variances) do not change significantly over the duration of the series.
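A crude check of weak stationarity (this is a heuristic of my own for illustration, not a formal statistical test) is to compare the first- and second-order statistics of the two halves of the series:

```python
import numpy as np

rng = np.random.default_rng(2)
x = 70.0 + rng.normal(0.0, 2.0, 1000)       # a roughly stationary synthetic series

first, second = x[:500], x[500:]
mean_shift = abs(first.mean() - second.mean())
var_ratio = first.var() / second.var()

# Flag the series as (weakly) stationary if neither statistic changes much;
# the thresholds here are arbitrary illustrations.
stationary_ish = mean_shift < 1.0 and 0.5 < var_ratio < 2.0
```

A series that fails such a check can still be pushed through the spectral machinery, but the output should not be interpreted as a spectrum in the strict sense.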
Fourier's original formulation of harmonic analysis treated continuous functions with analytic expressions. Fortunately, we don't need to develop analytic expressions for our time series of observations in order to study their frequency-domain characteristics.
The discrete Fourier transform (DFT) can be applied to a set of uniformly sampled measurements to produce a discrete frequency spectrum, in which the amplitude of the spectrum is defined only for a set of discrete frequencies. The highest of these frequencies is half the sampling frequency, a limit known as the Nyquist frequency. In terms of information theory, no information about higher frequencies can be unambiguously encoded in the original time series.
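A small sketch (NumPy conventions, synthetic data) shows the discrete frequencies running from zero up to the Nyquist frequency:

```python
import numpy as np

fs = 4.0                                  # sampling frequency, Hz
n = 512
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 0.25 * t)          # a single 0.25 Hz component

spectrum = np.fft.rfft(x)                 # DFT of a real-valued series
freqs = np.fft.rfftfreq(n, d=1.0 / fs)    # discrete frequencies, 0 .. fs/2

# freqs[-1] is the Nyquist frequency (fs/2 = 2.0 Hz), and the largest
# amplitude appears in the 0.25 Hz bin.
```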
[I omitted this section when I gave this talk, for reasons of time -- but it is an important subject.]
This brings us to an important point about observing time series. If we decide to measure the height of the tide once an hour, we have a reasonable expectation of observing something close to the true high tide. If we sample once every four hours, we might get lucky, or perhaps not. If we sample once a day, we can't expect to get accurate daily estimates of a variable that fluctuates more rapidly than we are taking measurements. So we need to have a reasonable idea of the highest frequency that is present in the time series, and then we need to take measurements at least twice as fast as that, so that the highest frequency in the input is at or below the Nyquist frequency. In practice, we should try to choose a sampling frequency well above twice the highest frequency in the input.
The problem with "undersampling" (using a sampling frequency lower than twice the highest frequency in the input) is not only that we don't get to measure that component of the signal, but also that the component we can't measure directly appears in our time series as a lower-frequency component. This phenomenon, in which an unwanted high-frequency signal appears as a phantom low-frequency signal, is known as aliasing. Aliasing is bad because when it occurs, we can't trust the measurements we have made.
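The phenomenon is easy to reproduce with synthetic data: a 0.9 Hz sine sampled once per second (so that it lies above the 0.5 Hz Nyquist limit) shows up as a phantom 0.1 Hz component:

```python
import numpy as np

fs = 1.0                                   # one sample per second
t = np.arange(100) / fs
true_freq = 0.9                            # Hz -- above the 0.5 Hz Nyquist limit
x = np.sin(2 * np.pi * true_freq * t)

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
apparent = freqs[np.argmax(spectrum)]      # the phantom (aliased) frequency
# apparent is 0.1 Hz: the alias of 0.9 Hz is |fs - 0.9| = 0.1 Hz
```

Nothing in the sampled series distinguishes the alias from a genuine 0.1 Hz oscillation, which is exactly why aliased measurements cannot be trusted.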
The standard solution to aliasing is the use of "antialiasing filters", analog components of the recording equipment that remove high-frequency components of the continuous signal that is to be sampled. These are universally used in applications such as digital audio recording. In HRV analysis, however, we have no continuously observable signal that we can filter; we are constrained to obtaining glimpses of the (ideal) signal once per cardiac cycle.
Fortunately, HRV time series are largely immune to aliasing, since the effective sampling frequency of once per cardiac cycle is usually more than twice the highest frequencies of interest in the HRV spectrum. This is not always so, however, and it is important to be aware that aliasing may confound analysis of HRV spectra, particularly when the mean heart rate is relatively low.
In 1965, the mathematicians James Cooley (1926-) and John Tukey (1915-2000) published a seminal paper, "An algorithm for the machine calculation of complex Fourier series," in which they described efficient algorithms for transforming discrete series with non-prime lengths, by a clever divide-and-conquer method involving DFTs of shorter subseries. The Cooley-Tukey algorithm and its variants are known as the fast Fourier transform (FFT). The FFT is not a fast approximation of the DFT; it is a highly efficient implementation of the DFT, in which the intermediate calculations are rearranged to reduce redundancy and the effects of error propagation in finite-precision arithmetic. In other words, the FFT is faster and better than direct evaluation of the DFT.
The FFT operates most efficiently on input series that have a length of exactly a power of two. The speed advantage is so great that very few Fourier analyses are performed on series with other lengths. Common practice is to zero-pad the input series after subtracting the mean, that is, to append additional zero values to increase its length to the next larger power of two.
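The zero-padding step can be sketched as follows (the helper name is mine):

```python
import numpy as np

def next_pow2(n):
    # smallest power of two that is >= n
    p = 1
    while p < n:
        p *= 2
    return p

x = np.random.default_rng(1).normal(70.0, 3.0, 1000)
x = x - x.mean()                          # subtract the mean first...
padded = np.zeros(next_pow2(x.size))      # ...then append zeros up to the
padded[:x.size] = x                       # next larger power of two (1024 here)
```

Subtracting the mean first matters: padding a series with a non-zero mean would introduce the same kind of step discontinuity that windowing is meant to remove.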
(In a 1997 interview, Cooley acknowledged the earlier work of Gauss, who worked out and actually used a factorization of the Fourier series about 150 years earlier than Cooley and Tukey; but then, as Cooley noted, N wasn't very big [at that time].)
Another widely-used approach to harmonic analysis of time series goes by several names: autoregressive (AR) spectral modelling, maximum entropy method (MEM), and "all-poles" estimation. These names refer to a technique that fits a function of frequency with a predefined number of poles (frequencies at which the fitted function becomes infinite) to the spectrum of the input time series.
What got physicists excited about AR spectral modelling is that the poles are ideal models for resonance-type phenomena, such as "line" features in spectra, that are not particularly well-modelled by FFT spectra. HR spectra generally don't have "line" features (although there are notable exceptions for subjects with fixed-rate pacemakers, and for cases with certain types of recording artifacts).
AR spectra are visually appealing; they look clean and uncluttered in comparison with typical FFT spectra. As a result, it is tempting to replace a messy-looking FFT with an attractive AR spectrum. Should we do so?
AR spectral estimation reduces an input series of N points (degrees of freedom) to a function with P degrees of freedom, where P is the number of poles and is generally much less than N. In terms of information theory, the uncluttered appearance of AR spectra is easily understood -- much of the information in the input time series, as measured by the degrees of freedom, has been thrown away!
Of course, much of that information might be noise, anyway. The important point to recognize about AR spectra is that they reflect not only the underlying input but also an assumption about its complexity. If we are confident about our assumptions, and we state them clearly, AR spectra have a useful place in our armamentarium of frequency domain analysis methods; at all other times, we should be mindful of what may be hidden behind that clean-looking spectrum.
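For concreteness, here is a minimal sketch of the "all-poles" idea using the textbook Yule-Walker equations (my own illustration, not the algorithm of any particular HRV package; the function name is made up):

```python
import numpy as np

def yule_walker_psd(x, p, freqs, fs=1.0):
    # Fit AR(p) coefficients via the Yule-Walker equations, then evaluate
    # the all-poles spectrum sigma^2 / |1 - sum_k a_k e^{-2*pi*i*f*k/fs}|^2.
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = x.size
    # biased autocorrelation estimates r[0..p]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    # solve the Toeplitz system R a = r[1..p] for the AR coefficients
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    a = np.linalg.solve(R, r[1:])
    sigma2 = r[0] - a @ r[1:]                # driving-noise variance
    k = np.arange(1, p + 1)
    z = np.exp(-2j * np.pi * np.outer(freqs, k) / fs)
    return sigma2 / np.abs(1.0 - z @ a) ** 2

# Example: an AR(8) fit to a noisy 0.1 Hz oscillation sampled at 1 Hz.
rng = np.random.default_rng(0)
t = np.arange(500)
x = np.sin(2 * np.pi * 0.1 * t) + 0.1 * rng.standard_normal(t.size)
freqs = np.linspace(0.01, 0.5, 200)
psd = yule_walker_psd(x, 8, freqs)
peak = freqs[np.argmax(psd)]                 # falls near 0.1 Hz
```

The choice of p = 8 here is exactly the kind of complexity assumption discussed above: the fitted spectrum can have at most four peaks, no matter what is in the data.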
An issue that affects nearly all frequency-domain analysis of HRV is that of non-uniform sampling. By this, we mean that the time intervals between successive samples of our time series are not constant.
In HRV analysis, nonuniform sampling is a given. If the time intervals between beats were constant, there would be no HRV, and no HRV analysis, and no course on HRV analysis, and we could all be sailing right now.
Conventional methods of harmonic analysis, including all Fourier methods as well as AR methods, depend on uniform sampling intervals. So when we decide to apply these methods to HRV analysis, we have a choice: we can resample the series at uniform intervals, or we can use a method that accepts the non-uniformly sampled observations directly.
The most widely used methods for resampling are based on integrated pulse-frequency modulation (IPFM) models introduced in 1973 by Barry Hyndman and RK Mohn.
The underlying idea is that we can overlay the sequence of times of occurrence of the beats on a regular grid with intervals matching the desired sampling interval. Within each sampling interval, we can count the number of complete RR intervals and the fractions of the intervals included at the beginning and end of the sampling interval. Thus we may find that a particular sampling interval of (say) 1 second contains an entire RR interval plus fractions of two others, adding up to 1.5 RR intervals. We can then calculate the instantaneous HR during that one-second interval as 1.5 beats per second, or 90 beats per minute. In practice, the desired sampling interval is usually chosen to be 0.25 or 0.5 seconds.
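A sketch of this counting procedure (my own illustration, in the spirit of the IPFM-based method, not Hyndman and Mohn's original code) uses a linearly interpolated cumulative beat count:

```python
import numpy as np

def instantaneous_hr(beat_times, dt=0.5):
    # Resample a beat sequence to uniformly spaced instantaneous HR (beats/s)
    # by counting whole and fractional RR intervals in each sampling interval.
    beat_times = np.asarray(beat_times, dtype=float)
    grid = np.arange(beat_times[0], beat_times[-1], dt)
    # cumulative beat count, interpolated linearly within each RR interval
    counts = np.arange(beat_times.size, dtype=float)
    n_at = np.interp(grid, beat_times, counts)
    # (fractional) beats per sampling interval, divided by the interval length
    return grid[:-1], np.diff(n_at) / dt

# Constant RR intervals of 2/3 s give 1.5 beats/s (90 bpm) everywhere,
# matching the worked example above.
t_grid, hr = instantaneous_hr(np.arange(0.0, 60.0, 2.0 / 3.0), dt=1.0)
```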
It is rarely possible to acquire a long HR time series without ectopic beats or beats that cannot be detected reliably. When interruptions in the sequence of sinus beats occur, frequency domain methods can produce poor quality and even misleading outputs, because the impulse in HR that accompanies such an interruption in the time domain corresponds to broad-band noise in the frequency domain.
Some investigators advocate selecting only uninterrupted sequences of sinus intervals for HRV analysis. Aside from the practical difficulties of obtaining such data, it should be recognized that noise and ectopy are often correlated with physical and mental stress and exertion and with other changes in physiology that may be relevant to studies of HRV. Selection of "perfect" data thus carries a very real risk of introducing selection bias.
Other investigators delete the interruptions and concatenate the surrounding data. Long-term correlations in the time series can be disturbed by this approach.
A third approach is to fill in (interpolate) missing sinus beats as needed. This can work very well if the gaps are due to compensated ventricular ectopy or to noise; supraventricular ectopic beats that reset the sinus pacemaker cannot be avoided in this way, however, so it may also be necessary to detect outliers in the instantaneous HR time series and replace them with interpolated values.
Some proponents of "beat-number domain" analysis point out that resampling and interpolation of missing beats is no less questionable than pretending that all of the RR intervals are the same when that assumption is convenient. There is yet another approach that avoids all of these problems.
The Lomb periodogram method, which was introduced by astronomer Nicholas Lomb in 1976 as a method for estimating the period of a variable star from a set of unevenly spaced measurements, is in many ways an ideal choice for frequency domain analysis of HR time series. Its basic principle is that one can fit a sine wave of any frequency to a set of observations using least-squares methods, and (having done so) one then has one point of a spectrum. By repeating the fit at other frequencies, one by one, we can build up a spectrum with whatever frequency resolution we wish.
[An interesting feature of the Lomb periodogram is that the Nyquist frequency is "fuzzy"; since the intervals between samples are non-uniform, the highest frequencies that can be measured reflect the information obtained from the most closely-spaced samples. Since these measurements are necessarily obtained from a small subset of all of the input samples, there is a greater uncertainty associated with them than with those at lower frequencies. The Lomb method is unique, however, in providing any information at all for frequencies above the Nyquist frequency implied by the mean sampling interval.]
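SciPy provides an implementation; here is a minimal sketch on synthetic, unevenly spaced data (note that scipy.signal.lombscargle expects angular frequencies, hence the factor of 2*pi):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 100.0, 400))     # irregular observation times
x = np.sin(2 * np.pi * 0.1 * t)               # a 0.1 Hz oscillation
x = x - x.mean()                              # zero-mean before fitting

freqs = np.linspace(0.01, 0.5, 200)           # frequencies of interest, Hz
pgram = lombscargle(t, x, 2 * np.pi * freqs)  # angular frequencies expected
peak = freqs[np.argmax(pgram)]                # falls at (or very near) 0.1 Hz
```

Because each frequency is fitted by least squares against the actual observation times, no resampling or interpolation is ever needed.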
In 1989, Bill Press and George Rybicki devised a fast algorithm for computation of the Lomb periodogram, which has something of the flavor of the FFT in that it rearranges and partitions the input time series in order to reduce the amount of redundant calculation. Like the FFT and the AR spectral estimation methods, this algorithm has a running time proportional to N log N for an N-point input series, so any of these methods is equally practical for long and short time series. I applied the Lomb method to HRV analysis in a short paper in 1993, and it has become quite popular since.
It doesn't matter if the data are unevenly spaced or even if there are very large gaps; there is no need to imagine what isn't there, no need to resample observations at uniform intervals, and (importantly) no reason to be particularly forgiving of observations of questionable quality. We can be quite demanding of quality in each measurement, since discarding a questionable datum doesn't mean having to throw out the entire data set, or having to make up a replacement by interpolation.
(Although the method is tolerant of occasional large gaps, excessive "clumpiness" in the input time series can affect the normalization.)
To recap, the central idea of harmonic analysis is to represent a time series as a sum of sinusoids. The spectrum summarizes this analysis by providing the magnitudes and phases of these sinusoids as functions of frequency. Most often, the magnitude spectrum is squared to get the power spectral density function (or power spectrum), and measurements used in studies of HRV come from this power spectrum.
We can sum the power at each defined frequency (or we can integrate the power spectral density function to get the area under the curve) in order to obtain the total power in a frequency band of interest. The common definitions are:
total HRV power                    0 - 0.5 Hz
ultra-low frequency (ULF) power    0 - 0.0033 Hz
very low frequency (VLF) power     0.0033 - 0.04 Hz
low frequency (LF) power           0.04 - 0.15 Hz
high frequency (HF) power          0.15 - 0.4 Hz
very high frequency (VHF) power    0.4 - 0.5 Hz
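The band-power computation can be sketched as follows (a hypothetical helper of my own; it sums power spectral density samples within each of the bands defined above):

```python
import numpy as np

# The standard HRV band definitions, in Hz
BANDS = {
    "ULF": (0.0, 0.0033),
    "VLF": (0.0033, 0.04),
    "LF":  (0.04, 0.15),
    "HF":  (0.15, 0.4),
    "VHF": (0.4, 0.5),
}

def band_powers(freqs, psd):
    # Sum the power spectral density within each band; on a uniform
    # frequency grid, multiplying by the grid spacing approximates the
    # area under the curve.
    df = freqs[1] - freqs[0]
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
            for name, (lo, hi) in BANDS.items()}

# Sanity check: with a flat PSD of 1.0 over 0-0.5 Hz, each band's power
# equals its width (up to grid effects), so LF/HF is about 0.11/0.25.
freqs = np.linspace(0.0, 0.5, 5001)
p = band_powers(freqs, np.ones_like(freqs))
lf_hf = p["LF"] / p["HF"]
```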
The units of these parameters are often not cited, making it difficult to compare parameter values across studies. If the input series contains interval values measured in seconds, the power spectral density function has units of seconds squared per Hz, and the units of power are seconds squared, often converted to milliseconds squared to bring the numerical values into a reasonable range. Typical values of LF and HF power for healthy adults are several hundred to a thousand milliseconds squared. ULF and VLF power are generally higher (several thousand ms squared for VLF, an order of magnitude more for ULF). VHF power is usually negligible; when it is not, this is a good indicator that aliasing may be an issue, perhaps because of an unusually high respiration rate, an unusually low mean heart rate, or both.
In addition, the LF/HF ratio is commonly cited as a parameter of interest. In healthy adults, the LF/HF ratio is typically between 1.5 and 4.5.
Since the sympathetic and parasympathetic (vagal) branches of the autonomic nervous system have differing characteristic time constants, it is generally believed that high frequency power (HF, 0.15 - 0.4 Hz in adults) is a measure of parasympathetic (vagal) tone, while the low frequency power (LF, 0.04 - 0.15 Hz in adults) reflects both sympathetic and parasympathetic activity. The LF/HF ratio, when high, is considered to reflect dominance of sympathetic activity, and when low, is thought to reflect dominance of vagal activity, hence it is sometimes cited as an index of sympatho-vagal balance.
These interpretations, though widely accepted, remain somewhat controversial.
What would Fourier, who first explored the frequency domain in the context of the propagation of heat, think of applying his methods to studies of the rhythm of the heart? He's not here to speak for himself, but he left us these words:
"Mathematics compares the most diverse phenomena and discovers the secret analogies that unite them."