A low-cost visual sensor network for elderly care

A low-resolution visual sensor network enables monitoring of elderly people's health and safety at home, postponing institutionalized healthcare.
23 December 2014
Francis Deboeverie, Richard Kleihorst, Wilfried Philips, Jan Hanca and Adrian Munteanu

As the population ages, there is a growing incidence of impaired mobility and cognitive disorders such as Alzheimer's disease. For elderly people with these conditions it is often necessary to move to care facilities, where round-the-clock assistance is available. Many dread this solution, which furthermore comes at a huge economic cost to society.

Technology can provide increased levels of safety for elderly people living at home, postponing the move to institutional care settings, or even eliminating it completely. Simple devices, such as wearable panic buttons, are cheap and useful, but they fail when patients forget to wear them, forget how to use them, or become unconscious. Networks of simple sensors in the living environment, combining pressure sensors (on a bed, chair, or toilet), door and window opening sensors, and motion sensors,1 can provide basic location data, but cameras would allow much richer safety and behavioral monitoring. Camera images can localize a person and support detailed analysis of their pose2 and behavior. However, such systems are expensive, not so much because of the actual cameras, but because of the associated infrastructure (networks, cabling, and computers) and installation costs.

Our solution is a sensor network based on very low-resolution (900-pixel) visual sensors and low-bit-rate wireless communication. Distributed processing algorithms running on microcontrollers and microcomputers analyze changes in motion and behavior patterns over time and detect possible emergency situations. At that point, family members or caregivers can activate (low-quality) video streaming to assess the situation. The low resolution of the sensor poses significant technical challenges, but it enables a cheap, battery-powered, wireless system.

A key component of our approach is analysis of location, motion, and pose. Figure 1 shows the distributed processing pipeline. First, a microcontroller in each sensor performs preprocessing, including devignetting (correcting for lower brightness at the periphery of the image), automatic gain control, and noise reduction, and then runs video analysis algorithms to separate the silhouettes of moving persons from the static background.3,4 Handling the low resolution, the noise, and the poor, rapidly changing lighting conditions is particularly challenging. A single-board, low-cost computer runs a robust people tracker5 based on recursive maximum likelihood principles. The tracker requires only the bounding boxes of the silhouettes as seen by each camera, and therefore avoids data communication in the absence of changes, prolonging battery life.
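To make the per-sensor processing concrete, the sketch below shows a deliberately simplified stand-in for the foreground detection step: a running-average background model with thresholded frame differencing on a 30×30 (900-pixel) frame, returning only the bounding box of changed pixels. The actual system uses the more robust edge-based methods of references 3 and 4; the frame size, thresholds, and blob shape here are illustrative assumptions.

```python
import numpy as np

FRAME_SHAPE = (30, 30)  # the 900-pixel sensor resolution

def update_background(bg, frame, alpha=0.05):
    """Running-average background model (a simplified stand-in for refs. 3, 4)."""
    return (1 - alpha) * bg + alpha * frame

def foreground_bbox(bg, frame, thresh=25):
    """Threshold |frame - bg| and return the bounding box of changed pixels,
    or None if nothing moved (so the sensor transmits nothing)."""
    mask = np.abs(frame.astype(int) - bg.astype(int)) > thresh
    if not mask.any():
        return None
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    return rows[0], cols[0], rows[-1], cols[-1]  # top, left, bottom, right

# Example: a static background with a brighter 'silhouette' in one frame
bg = np.full(FRAME_SHAPE, 100.0)
frame = bg.copy()
frame[10:20, 5:9] += 80  # hypothetical moving person
print(foreground_bbox(bg, frame))  # → (10, 5, 19, 8)
```

Only these four bounding-box coordinates per camera reach the tracker, which is what keeps the wireless traffic, and hence the battery drain, low.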

Figure 1. Distributed processing pipeline for automatic behavior analysis.

To detect behavioral changes over time, we first cluster person trajectories in time and space (see Figure 2). Specifically, our system automatically detects ‘hot spots’—frequently occupied locations—and computes mobility statistics for these, and for the tracks between them. Figure 3 shows the changes in activity level of an elderly person recovering from a stroke over 40 days. In this case, activity levels decrease in the sitting area, but increase in the kitchen,6,7 indicating improved mobility. Figure 4 shows the evolution of other behavior-related statistics.8
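A minimal sketch of the hot-spot idea, under assumed data: quantize tracked (x, y, t) positions onto a grid of floor cells and keep the cells occupied for at least a minimum dwell time. The real system's spatio-temporal clustering6,7 is more sophisticated; the grid size, dwell threshold, and trajectory below are invented for illustration.

```python
from collections import Counter

def hot_spots(track, cell=1.0, min_dwell=10):
    """Quantize (x, y) positions to grid cells and return the cells occupied
    for at least `min_dwell` samples -- a crude stand-in for trajectory clustering."""
    cells = Counter((int(x // cell), int(y // cell)) for x, y, _t in track)
    return {c: n for c, n in cells.items() if n >= min_dwell}

# Hypothetical trajectory: a long stay near the couch, then a brief walk to the kitchen
track = ([(2.2, 3.1, t) for t in range(15)]
         + [(5.0 + 0.1 * t, 3.0, 15 + t) for t in range(5)])
print(hot_spots(track))  # → {(2, 3): 15}; the short kitchen visit is below threshold
```

Day-over-day counts and dwell times per hot spot, plus statistics on the tracks between them, are what feed the trends plotted in Figures 3 and 4.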

Figure 2. Person trajectories in the kitchen and living areas.

Figure 3. Changes in activity level of an elderly person recovering from a stroke.

Figure 4. Mobility statistics per day, including the time of getting up, bedtime, walking distance, time spent on the couch, and the number of specific tracks.

Pose and motion analysis may indicate possible emergencies such as falls or wandering behavior, but to reduce the cost of false alarms any emergency response should adopt a cascaded approach. For example, as a first step, family members or caregivers may attempt to contact the person by phone. If there is no response, they can activate video transmission to assess the situation.

We designed the system's video codec (for coding and decoding) specifically for extremely low-resolution data. It allows high-quality but very low-bandwidth wireless transmission, and it can still run on microcontrollers despite their limited computing power. The main functional units implement only the most probable coding options,9 meaning a single intra-frame prediction mode and a single data block size. Moreover, the video coder has reduced computational needs because it avoids motion estimation (computing the extent to which objects move in the picture), the most time-consuming operation in traditional codecs.10 Hence, our system avoids mode-decision mechanisms, predicting the inter-coded frames from the co-located blocks in the preceding pictures.
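The core of that inter-coding step can be sketched as follows. This is our own simplified illustration, not the codec of references 9 and 10: each frame is predicted from the co-located pixels of the previous frame (no motion search), and a coarse quantizer on the residual stands in for the real quantization and entropy-coding stages.

```python
import numpy as np

def encode_inter(prev, cur, q=8):
    """Predict the frame from the co-located pixels of the previous frame
    (no motion estimation) and quantize the residual with step q."""
    resid = cur.astype(int) - prev.astype(int)
    return (resid // q).astype(np.int8)  # stand-in for quantization + entropy coding

def decode_inter(prev, qresid, q=8):
    """Reconstruct by adding the dequantized residual to the previous frame."""
    return np.clip(prev.astype(int) + qresid.astype(int) * q, 0, 255).astype(np.uint8)

prev = np.full((8, 8), 100, dtype=np.uint8)
cur = prev.copy()
cur[2:6, 2:6] = 148  # an object appears in view
rec = decode_inter(prev, encode_inter(prev, cur))
print(np.abs(rec.astype(int) - cur.astype(int)).max())  # → 0 (residual is a multiple of q here)
```

Skipping the motion search is what makes the encoder cheap enough for a microcontroller: at 900 pixels per frame there is little to gain from block matching anyway.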

We ensured error-resilient transmission using a row-column bit interleaver—which spreads transmission losses over multiple packets—and systematic forward-error-correction codes that protect each chunk of video data. We adjusted the protection level to the network properties by randomly puncturing (omitting) some of the parity bits generated by the error-correction coder. This reduces the amount of memory used for storing the generator matrices (whose rows form the basis of the linear code) at the sensor node.10
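The row-column interleaver itself is simple: write the bits into a matrix row by row, read them out column by column. A burst of consecutive channel losses then maps to isolated losses spread across the stream, which the forward-error-correction code can repair. The sketch below (dimensions and data chosen for illustration) demonstrates the effect.

```python
def interleave(bits, rows, cols):
    """Row-column interleaver: write row-wise, read column-wise, so a burst
    of consecutive lost bits on the channel becomes scattered single losses."""
    assert len(bits) == rows * cols
    grid = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
    return [grid[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse: write column-wise, read row-wise."""
    grid = [bits[c * rows:(c + 1) * rows] for c in range(cols)]
    return [grid[c][r] for r in range(rows) for c in range(cols)]

data = list(range(12))
tx = interleave(data, rows=3, cols=4)
tx[0:3] = [None] * 3  # a burst of three consecutive losses on the channel
rx = deinterleave(tx, rows=3, cols=4)
print([i for i, b in enumerate(rx) if b is None])  # → [0, 4, 8]: losses are spread out
```

Each de-interleaved chunk now contains at most one erased bit, which is exactly the error pattern a systematic forward-error-correction code handles best.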

Despite significant technical challenges, low-resolution visual sensor networks are a viable solution to monitor people's behavior at home. They provide sufficiently rich information to detect health-related behavioral changes and even enable low-quality video transmission to assess emergency situations. They can provide this functionality without cabling, significantly reducing installation cost.

Our future work will focus on rudimentary semantic activity classification, using temporal probability models.11 For example, frequent motion within the kitchen area followed by a period of sitting may indicate cooking followed by eating.
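As a toy illustration of such a temporal model, the sketch below runs the standard Viterbi algorithm on a two-state hidden Markov model. The states, observations, and all probabilities are invented for this example and are not the trained models of reference 11.

```python
# Illustrative two-activity HMM; states, observations, and probabilities are assumptions.
states = ["cooking", "eating"]
start = {"cooking": 0.6, "eating": 0.4}
trans = {"cooking": {"cooking": 0.7, "eating": 0.3},
         "eating": {"cooking": 0.2, "eating": 0.8}}
emit = {"cooking": {"kitchen_motion": 0.9, "sitting": 0.1},
        "eating": {"kitchen_motion": 0.2, "sitting": 0.8}}

def viterbi(obs):
    """Most likely activity sequence for a sequence of observed sensor events."""
    v = [{s: (start[s] * emit[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        v.append({s: max(((p * trans[r][s] * emit[s][o], path + [s])
                          for r, (p, path) in v[-1].items()), key=lambda t: t[0])
                  for s in states})
    return max(v[-1].values(), key=lambda t: t[0])[1]

print(viterbi(["kitchen_motion", "kitchen_motion", "sitting", "sitting"]))
# → ['cooking', 'cooking', 'eating', 'eating']
```

Even this crude model recovers the intended interpretation: kitchen motion followed by sitting decodes to cooking followed by eating.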

This system was developed in the iMinds research project 'Little Sister: Low-cost monitoring for care and retail'12 and is currently being evaluated in the Ambient Assisted Living Joint Programme project SONOPA (Social Networks for Older adults to Promote an Active life).13

Francis Deboeverie, Richard Kleihorst, Wilfried Philips
Image Processing and Interpretation
Ghent University/iMinds
Ghent, Belgium

Francis Deboeverie received a Master of Science in electronics and ICT engineering technology in 2007, and a PhD in engineering in 2014. He is currently a postdoctoral researcher.

Richard Kleihorst received a PhD from Delft University in 1994. He worked at Philips, NXP, and VITO, and is a guest professor at Ghent University. His main research topic is smart camera networks, which form the basis of two companies he started. He founded the IEEE/Association for Computing Machinery International Conference on Distributed Smart Cameras and the Workshop on Architecture of Smart Cameras.

Wilfried Philips is a senior professor and heads the Image Processing and Interpretation research group. His main research interests are image and video restoration and multi-camera computer vision. He has received several scientific awards, including the Alumni award of the Belgian-American Educational Foundation.

Jan Hanca, Adrian Munteanu
Department of Electronics and Informatics
Vrije Universiteit Brussel/iMinds
Brussels, Belgium

Jan Hanca received MSc and engineering degrees in electronics and telecommunications from Poznan University of Technology, Poland, in 2010. He is currently a PhD researcher, supported by a grant from the Flemish Agency for Innovation by Science and Technology.

Adrian Munteanu is a professor in the Electronics and Informatics Department. His area of expertise is data compression, on which he has published more than 200 scientific articles, patent applications, and contributions to standards. He currently serves as associate editor for IEEE Transactions on Multimedia.

1. http://beclose.com/press-031810.aspx A wireless sensor-based system for elderly people and their caregivers. Accessed 24 November 2014.
2. http://www-sop.inria.fr/members/Francois.Bremond/topicsText/gerhomeProject.html The GER'HOME project: multi-sensor analysis for everyday elderly activity monitoring. Accessed 24 November 2014.
3. B. B. Nyan, S. Grünwedel, P. Van Hese, J. Niño Castañeda, D. Van Haerenborgh, D. Van Cauwelaert, P. Veelaert, W. Philips, PhD Forum: Illumination-robust foreground detection for multi-camera occupancy mapping, Proc. 6th ACM/IEEE Int'l Conf. Distr. Smart Cam., p. 1-2, 2012.
4. F. Deboeverie, G. Allebosch, D. Van Haerenborgh, P. Veelaert, W. Philips, Edge-based foreground detection with higher order derivative local binary patterns for low-resolution video processing, Proc. 9th Int'l Conf. Comp. Vis. Theory Appl., p. 339-346, 2014.
5. B. B. Nyan, F. Deboeverie, M. El Dib, J. Guan, X. Xie, J. Niño Castañeda, D. Van Haerenborgh, et al., Human mobility monitoring in very low-resolution visual sensor network, Sensors 14, p. 20800-20824, 2014.
6. M. Eldib, B. B. Nyan, F. Deboeverie, J. Niño Castañeda, J. Guan, S. Van de Velde, H. Steendam, H. Aghajan, W. Philips, A low resolution multi-camera system for person tracking, Proc. IEEE Int'l Conf. Image Process., p. 486-490, 2014.
7. M. Eldib, B. B. Nyan, F. Deboeverie, X. Xie, H. Aghajan, W. Philips, Behavior analysis for aging-in-place using similarity heatmaps, Proc. 8th ACM/IEEE Int'l Conf. Distr. Smart Cam., 2014.
8. X. Xie, F. Deboeverie, M. Eldib, W. Philips, H. Aghajan, PhD Forum: Analyzing behavior patterns of the elderly from low-precision trajectories, Proc. 8th ACM/IEEE Int'l Conf. Distr. Smart Cam., 2014.
9. W. Chen, F. Verbist, N. Deligiannis, P. Schelkens, A. Munteanu, Efficient intra-frame video coding for low resolution wireless visual sensors, Proc. 18th ACM/IEEE Int'l Conf. Dig. Sig. Process., p. 1-6, 2013.
10. J. Hanca, G. Braeckman, A. Munteanu, W. Philips, Lightweight real-time error-resilient encoding of visual sensor data, J. Real-Time Image Process., p. 1-15, 2014.
11. T. van Kasteren, Activity recognition for health monitoring elderly using temporal probabilistic models, PhD thesis, University of Amsterdam, 2011.
12. http://www.iminds.be/en/research/overview-projects/p/detail/littlesister Little Sister: a low-cost sensor-based monitoring system for care and retail. Accessed 27 November 2014.
13. http://www.sonopa.eu/ SONOPA: a project promoting the use of social networks for older adults. Accessed 27 November 2014.