
Multiclass autoencoder-based active learning for sensor-based human activity recognition

Published: 27 February 2024

Abstract

Leveraging the enormous amounts of real-world data collected through Internet of Things (IoT) technologies, human activity recognition (HAR) has become a crucial component of numerous human-centric applications aimed at enhancing the quality of human life. While recent advancements in deep learning have significantly improved HAR, data labeling remains a major challenge due to the substantial cost of human annotation for supervised model training. Active learning (AL) addresses this issue by strategically selecting informative samples for labeling during model training, thereby enhancing model performance. Although numerous sample-selection approaches have been proposed that consider uncertainty and representativeness, estimating uncertainty and exploiting the distribution of high-dimensional data remain difficult. Our proposed deep learning-based active learning algorithm, called Multiclass Autoencoder-based Active Learning (MAAL), learns a latent representation by leveraging the capacity of Deep Support Vector Data Description (Deep SVDD). With a multiclass autoencoder that learns the normal characteristics of each activity class in the latent space, MAAL provides informative sample selection for model training by establishing a link between the HAR model and the selection model. We evaluate the proposed MAAL on two publicly available datasets. The results demonstrate improvements across the overall active learning rounds, with gains of up to 3.23% in accuracy and 3.67% in the F1 score. Furthermore, numerical results and an analysis of sample selection are presented to validate the effectiveness of the proposed MAAL against the comparison methods.
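
As a rough illustration of the selection principle summarized above, the sketch below scores unlabeled sensor windows by their distance to per-class centers in a learned latent space, in the spirit of Deep SVDD, and queries the farthest samples for annotation. The encoder outputs, class centers, and the select_informative helper are hypothetical placeholders for illustration only, not the authors' actual MAAL implementation.

```python
# Illustrative sketch only (assumed interfaces, not the paper's code):
# samples whose latent codes lie far from the center of their predicted
# activity class are treated as informative and sent for labeling.
import numpy as np

def select_informative(latent_unlabeled, class_centers, predicted_class, budget):
    """Return indices of the `budget` unlabeled samples that are farthest
    from the latent center of the class predicted by the current HAR model.

    latent_unlabeled : (N, D) latent codes from the multiclass autoencoder
    class_centers    : (C, D) per-class centers (Deep-SVDD-style)
    predicted_class  : (N,)   class indices predicted by the HAR model
    budget           : number of samples to query per active learning round
    """
    centers = class_centers[predicted_class]                    # (N, D)
    distances = np.linalg.norm(latent_unlabeled - centers, axis=1)
    return np.argsort(distances)[-budget:]                      # farthest = most informative

# Toy usage with random placeholders for latent codes and centers.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 16))         # 100 unlabeled windows, 16-dim latent space
centers = rng.normal(size=(6, 16))          # 6 activity classes
predictions = rng.integers(0, 6, size=100)  # HAR model predictions on the unlabeled pool
print(select_informative(latent, centers, predictions, budget=10))
```

How the centers are trained and how the HAR model is coupled to the selection model follow the paper's multiclass autoencoder design; the sketch only conveys a distance-based scoring step.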


Highlights

We present a deep learning-based active learning method for efficient dataset labeling.
The proposed method extends the autoencoder with SVDD in a multiclass scheme.
We evaluate the proposed active learning method in HAR application scenarios.
Experimental results show improvements in the performance of HAR with a smaller dataset.
Selection of informative samples that are difficult for the model to predict is validated.


Cited By

  • (2024) A novel outlier calendrical heterogeneity reconstruction deep learning model for electricity demand forecasting. Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology 46 (2), 3363–3394. https://doi.org/10.3233/JIFS-235218. Online publication date: 14-Feb-2024.
  • (2024) Hybrid Deep Learning Approaches for Human Activity Recognition and Postural Transitions Using Mobile Device Sensors. SN Computer Science 5 (7). https://doi.org/10.1007/s42979-024-03300-7. Online publication date: 1-Oct-2024.


        Published In

        Future Generation Computer Systems, Volume 151, Issue C
        Feb 2024, 273 pages

        Publisher

        Elsevier Science Publishers B. V.

        Netherlands

        Publication History

        Published: 27 February 2024

        Author Tags

        1. Internet of Things
        2. Active learning
        3. Human activity recognition
        4. Deep learning
        5. Multiclass autoencoder
        6. Multivariate time series

        Qualifiers

        • Research-article
