Movement Analysis

This demo shows an automated approach to classifying different types of human activity. The motions considered are sitting, standing, the acts of sitting down and standing up, walking, toothbrushing, and picking something up from the ground.

The concept of this classification is to take data from several inertial measurement units and process it to determine the current activity. Inertial measurement units, usually shortened to IMUs, are small and relatively cheap sensor packages that combine accelerometers and gyroscopes along several axes. This makes it possible to track both linear and angular motion.
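To make the data layout concrete, one IMU sample can be thought of as six values plus a timestamp. The following minimal Python sketch is illustrative only; the names and units are assumptions, not KiRAT's internal format.

```python
from dataclasses import dataclass

@dataclass
class ImuSample:
    """One 6-axis sample from a single IMU (illustrative layout)."""
    t: float                         # timestamp in seconds
    acc: tuple[float, float, float]  # accelerometer axes, m/s^2
    gyr: tuple[float, float, float]  # gyroscope axes, rad/s
```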

Schematic of the placement of the used sensors with short labels for the respective body parts.

Data of the relevant activities, acquired with nine IMUs placed as shown in the schematic above, was used to develop the implemented signal processing chain. This chain can be found in the main window of KiRAT under medical signal processing - motion signal processing - movement classification and is shown in the screenshot below.

Screenshot of the signal processing chain in KiRAT.

The sample rate conversion is only needed for live sensor data and is therefore inactive in this demo. The signal selection then splits the data into accelerometer and gyroscope channels, which are subsequently preprocessed. In these preprocessing steps, low-pass filtering denoises the sensor data and high-pass filtering removes the constant component of the accelerometer data caused by gravity. The respective filter parameters can be inspected by clicking on the blocks of interest inside the preprocessing modules.
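As an illustration of these preprocessing steps, the sketch below applies a causal low-pass filter for denoising and a high-pass filter to remove the gravity component. The sample rate, filter orders, and cutoff frequencies are assumptions; the actual values are set in the KiRAT blocks mentioned above.

```python
from scipy.signal import butter, sosfilt

FS = 100.0  # assumed sample rate in Hz

def denoise_lowpass(x, cutoff_hz=10.0):
    """Low-pass filter to suppress high-frequency sensor noise."""
    sos = butter(4, cutoff_hz, btype="low", fs=FS, output="sos")
    return sosfilt(sos, x, axis=0)

def remove_gravity(acc, cutoff_hz=0.3):
    """High-pass filter to remove the constant accelerometer
    component caused by gravity."""
    sos = butter(2, cutoff_hz, btype="high", fs=FS, output="sos")
    return sosfilt(sos, acc, axis=0)
```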

Most of the processing takes place in the subsequent feature extraction block. Its more intricate internal structure is shown in the following screenshot, again taken from the main KiRAT window.

Screenshot of the feature extraction signal processing chain in KiRAT.

To start with, the angles between the sensor packages and the ground are calculated, and vector norms as well as absolute values of the signals are computed. The vector norms are then used to estimate the autocorrelation of the measured movement signals, which later helps in finding periodicities in the motions.
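A possible way to compute these quantities is sketched below. The choice of reference axis for the angle is an assumption; the actual KiRAT implementation may differ.

```python
import numpy as np

def vector_norm(sig):
    """Euclidean norm across the three axes for each sample
    (sig has shape (n_samples, 3))."""
    return np.linalg.norm(sig, axis=1)

def angle_to_ground(acc_lp):
    """Estimate the angle between an assumed sensor axis (here x)
    and the ground plane from the gravity direction contained in
    low-pass filtered accelerometer data."""
    g = acc_lp / np.linalg.norm(acc_lp, axis=1, keepdims=True)
    return np.degrees(np.arcsin(np.clip(g[:, 0], -1.0, 1.0)))
```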

All of this is then used to determine numeric features in a frame-based, real-time manner. These features comprise an estimated movement frequency of the arms and legs respectively, an averaged angle between the thighs and the ground, and the change of that angle over a short time span. In addition, the vector-normed, high-pass filtered accelerometer signals are used to derive a measure of the overall activity, and time averaging of the processed gyroscope signals yields a measure of the bending of the lower back, which singles out the picking-up activity.
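The following sketch computes comparable per-frame features. The feature names, window handling, and frequency search range are assumptions rather than the exact KiRAT feature set.

```python
import numpy as np

def frame_features(acc_hp_norm, thigh_angle, back_gyro, fs=100.0):
    """Illustrative per-frame features; assumes frames of at least 2 s.
    acc_hp_norm: vector-normed, high-pass filtered accelerometer data,
    thigh_angle: angles from angle_to_ground,
    back_gyro: processed gyroscope signal of the lower-back sensor."""
    activity = float(np.mean(acc_hp_norm))                # overall activity
    mean_angle = float(np.mean(thigh_angle))              # averaged thigh angle
    angle_diff = float(thigh_angle[-1] - thigh_angle[0])  # short-term change
    back_bend = float(np.mean(back_gyro))                 # lower-back bending
    # movement frequency: dominant autocorrelation peak, searched in 0.5-5 Hz
    x = acc_hp_norm - acc_hp_norm.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]
    lo, hi = int(fs / 5.0), int(fs / 0.5)
    peak_lag = lo + int(np.argmax(ac[lo:hi]))
    return {"activity": activity, "mean_angle": mean_angle,
            "angle_diff": angle_diff, "back_bend": back_bend,
            "movement_freq": fs / peak_lag}
```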

Right now, only a small portion of all calculated feature values is used for the decision, so the feature selection block singles those out. This block makes it possible to change the features in use depending on the decision method, once different algorithms are implemented.
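Conceptually, this block reduces to picking a feature subset per decision method, e.g. (feature names are the illustrative ones from the sketch above):

```python
# Features used per decision method (illustrative names and subsets).
SELECTED = {"tree": ("activity", "mean_angle", "angle_diff",
                     "movement_freq", "back_bend")}

def select_features(features, method="tree"):
    """Pass through only the features the chosen method needs."""
    return {name: features[name] for name in SELECTED[method]}
```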

The current decision method is a classification tree whose splits are based on feature thresholds chosen through empirical analysis of the recorded data. A graphic of the implemented tree can be seen below.

Decision tree implemented in the movement classification KiRAT demo.
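As a sketch of how such a tree reads the selected features, consider the following; all thresholds and the split order here are hypothetical placeholders, and the implemented splits are the ones shown in the graphic above.

```python
# Hypothetical thresholds; the tuned values live in the KiRAT tree above.
LOW_ACTIVITY = 0.5    # below this the subject is considered at rest
THIGH_UPRIGHT = 45.0  # degrees separating horizontal and vertical thighs
BACK_BEND = 30.0      # lower-back bending marking the picking-up motion
BRUSH_FREQ = 3.0      # Hz, fast periodic arm motion
WALK_FREQ = 1.0       # Hz, slower periodic gait

def classify(f):
    """Minimal hand-built classification tree over the feature dict."""
    if f["activity"] < LOW_ACTIVITY:  # at rest: posture decides
        return "sitting" if f["mean_angle"] < THIGH_UPRIGHT else "standing"
    if f["back_bend"] > BACK_BEND:
        return "picking up"
    if f["movement_freq"] > BRUSH_FREQ:
        return "toothbrushing"
    if f["movement_freq"] > WALK_FREQ:
        return "walking"
    # remaining activity with a large short-term thigh-angle change
    return "standing up" if f["angle_diff"] > 0 else "sitting down"
```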

Once processing is started in KiRAT (by pressing the 'Start' button in the upper left of the main window), a short recording of motions is replayed from a file as if the sensors were delivering the data live. The recording starts with sitting, standing up, and standing, followed by a picking-up motion. Afterwards, a period of toothbrushing and some walking follow, and a sitting-down motion ending in sitting closes the recorded cycle. This loops until the 'Stop' button is pressed in KiRAT. During processing, the decision is shown in the block named 'Visual' in the main window: colored blocks in a plot indicate which activity is currently classified by the decision tree shown above.

Screenshot of the visualization of the decision.
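The replay mechanism described above can be sketched as a simple loop over the recorded frames; the names frames, process, and running are illustrative, not KiRAT internals.

```python
import itertools

def replay(frames, process, running):
    """Feed recorded frames to the processing chain as if they arrived
    live, cycling through the file until Stop is pressed."""
    for frame in itertools.cycle(frames):
        if not running():  # cleared by the Stop button
            break
        process(frame)
```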

At the same time, the additional window shows several plots of the estimated autocorrelations of the vector-normed accelerometer signals of the different IMUs. For each sensor package, there are three different plots (see the sketch after the list):

  • One plot with the regular (biased) autocorrelation and an unbiased version of it.
  • Another plot in which the autocorrelations are normalized such that the correlation at lag 0 is exactly 1.
  • A third plot of the autocorrelation over time in a color-mapped representation.
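The three estimates differ only in their scaling. A minimal sketch of how they can be computed from one vector-normed signal:

```python
import numpy as np

def autocorrelations(x):
    """Biased, unbiased, and lag-0-normalized autocorrelation estimates
    of a vector-normed signal (illustrating the plot variants above)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = x.size
    full = np.correlate(x, x, mode="full")[n - 1:]  # lags 0 .. n-1
    biased = full / n                     # divide by n (regular estimate)
    unbiased = full / (n - np.arange(n))  # divide by n - lag
    normalized = full / full[0]           # exactly 1 at lag 0
    return biased, unbiased, normalized
```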

All of these are especially helpful for the real-time evaluation of the current input by a user. The higher-frequency periodicity of the toothbrushing motion is particularly apparent in these plots. The periodicity of both arm and leg swing during walking can also be seen, though it is less pronounced due to the short distances walked for the recording. Other movements also lead to visible activity in the plots but lack the line characteristics of periodic motions.

Additional plotter window showing autocorrelations of different sensor packages.

The plotter in the main window shows preprocessed data of the left thigh's accelerometer for the separate axes. This can be used to follow the standing-up and sitting-down motions and their respective influence on the accelerometer. The angle computation takes advantage of these obvious relations. Using the GUI, a variety of other signals can be displayed in this plotter as well.

Plotter in the main window showing preprocessed accelerometer data of the left thigh.

The presented KiRAT setup can be used for real-time analysis of human motion while relying only on a small number of IMUs. In addition to the automatic classification, it provides helpful graphic representations of calculated features and signals for interpretation by the user.