
Deep Learning Human Activity Recognition

2019, CEUR workshop proceedings

Abstract

Human activity recognition is an area of interest in domains such as elderly and health care, smart buildings and surveillance, with multiple approaches to solving the problem accurately and efficiently. For many years, hand-crafted features were manually extracted from raw data signals, and activities were classified using support vector machines and hidden Markov models. To improve on this approach and to extract relevant features in an automated fashion, deep learning methods have been used. The most common of these are Long Short-Term Memory (LSTM) models, which take the sequential nature of the data into consideration and outperform earlier techniques, but which have two main pitfalls: longer training times and loss of distant-past memory. A relatively new type of network, the Temporal Convolutional Network (TCN), overcomes these pitfalls: it takes significantly less time to train than an LSTM and has a greater ability to capture long-term dependencies. When paired with a Convolutional Auto-Encoder (CAE) to remove noise and reduce the complexity of the problem, our results show that both models perform equally well, achieving state-of-the-art results; however, when tested for robustness on temporal data, the TCN outperforms the LSTM. The results also show that, for industry applications, the TCN can accurately be used for fall detection or similar events within a smart-building environment.
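The TCN's ability to capture long-term dependencies comes from stacking causal, dilated 1-D convolutions, whose receptive field grows with the dilation schedule rather than requiring recurrence. The sketch below is a minimal pure-Python illustration of that core operation; the kernel weights and dilations are illustrative choices, not values from the paper.

```python
# Minimal sketch of the core TCN operation: a causal, dilated 1-D convolution.
# Causal: the output at time t depends only on inputs at times <= t.
# Dilated: taps are spaced `dilation` steps apart, widening the receptive field.

def causal_dilated_conv(x, kernel, dilation):
    """Apply a causal dilated convolution to a 1-D signal (implicit left zero-padding)."""
    out = []
    for t in range(len(x)):
        s = 0.0
        for i, w in enumerate(kernel):
            idx = t - i * dilation      # look back i * dilation steps
            if idx >= 0:                # taps before the start of the signal contribute 0
                s += w * x[idx]
        out.append(s)
    return out

def receptive_field(kernel_size, dilations):
    """Number of past time steps visible to the top of a stack of causal dilated layers."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# With kernel size 2 and dilations doubling per layer (1, 2, 4),
# three layers already see 8 time steps of history:
print(receptive_field(2, [1, 2, 4]))                          # -> 8

# Causality check: the output at t=1 uses only x[0] and x[1].
print(causal_dilated_conv([1.0, 2.0, 3.0], [1.0, 1.0], 1))    # -> [1.0, 3.0, 5.0]
```

Doubling the dilation per layer is the usual TCN schedule: the receptive field grows exponentially with depth, which is why a shallow TCN can match or exceed the effective memory of an LSTM while training in parallel across time steps.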
