DOI: 10.1145/3666025.3699325
Research article · Open access

PhyMask: An Adaptive Masking Paradigm for Efficient Self-Supervised Learning in IoT

Published: 04 November 2024

Abstract

This paper introduces PhyMask, an adaptive masking paradigm designed to enhance the efficiency and interpretability of Masked Autoencoders (MAEs) in analyzing IoT sensing signals. Unlike mainstream MAEs, which rely on random masking, PhyMask employs an adaptive masking strategy that aligns with critical signal information. Its main contributions are threefold. First, PhyMask leverages the energy significance of frequency components to prioritize information-rich time-frequency regions, improving the reconstruction of the original signals. Second, it includes a coherence-based masking component that identifies and preserves essential temporal dynamics within the data. Finally, PhyMask integrates these components into an adaptive masking paradigm tailored to optimize sensing-context awareness within the masking configuration, focusing on the most informative parts of the data. This allows PhyMask to mask up to 96% of the input, reducing memory requirements by 14% and accelerating pre-training. Evaluations across two sensing applications, four datasets, and two real-world deployments demonstrate PhyMask's superior performance: it improves MAE accuracy by 7%, reduces pre-training data requirements by up to 75%, and enhances robustness to domain shifts and signal-quality variations, making it valuable for robust and efficient intelligent IoT deployments.
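
To make the energy-guided idea concrete, the sketch below shows one plausible way such a masking policy could be realized: patch a magnitude spectrogram, score each patch by its energy, and keep only a small, energy-weighted subset of patches visible at a 96% mask ratio. This is a minimal illustration under assumed parameters (the energy_guided_mask helper, the window length, and the patch sizes are hypothetical), not the authors' implementation, and it omits PhyMask's coherence-based component.

import numpy as np

def energy_guided_mask(signal, patch_t=8, patch_f=8, mask_ratio=0.96, rng=None):
    """Illustrative sketch of energy-guided time-frequency masking.

    Splits a magnitude spectrogram into patches and keeps only a small
    fraction visible, sampled with probability proportional to patch
    energy, so information-rich regions drive reconstruction. NOT the
    PhyMask implementation; names and parameters are hypothetical.
    """
    rng = rng or np.random.default_rng(0)

    # Framed FFT with a Hann window as a stand-in spectrogram front end.
    frame, hop = 256, 128
    n_frames = 1 + (len(signal) - frame) // hop
    win = np.hanning(frame)
    spec = np.abs(np.stack([
        np.fft.rfft(win * signal[i * hop: i * hop + frame])
        for i in range(n_frames)
    ]))  # shape: (time frames, frequency bins)

    # Tile the spectrogram into non-overlapping patches; sum energy per patch.
    T = (spec.shape[0] // patch_t) * patch_t
    F = (spec.shape[1] // patch_f) * patch_f
    patches = spec[:T, :F].reshape(T // patch_t, patch_t, F // patch_f, patch_f)
    energy = (patches ** 2).sum(axis=(1, 3)).ravel()

    # Keep roughly (1 - mask_ratio) of the patches visible, favoring high-energy ones.
    n_visible = max(1, int(round((1.0 - mask_ratio) * energy.size)))
    probs = energy / energy.sum()
    visible = rng.choice(energy.size, size=n_visible, replace=False, p=probs)

    mask = np.ones(energy.size, dtype=bool)  # True = masked (hidden from the encoder)
    mask[visible] = False
    return mask.reshape(T // patch_t, F // patch_f)

With these example settings, a one-second signal sampled at 16 kHz yields about 240 patches, of which roughly 10 remain visible to the encoder, mirroring the 96% mask ratio reported in the abstract.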



    Published In

    SenSys '24: Proceedings of the 22nd ACM Conference on Embedded Networked Sensor Systems
    November 2024, 950 pages
    ISBN: 9798400706974
    DOI: 10.1145/3666025

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. multimodal sensing
    2. self-supervised learning
    3. internet of things


    Acceptance Rates

    Overall acceptance rate: 174 of 867 submissions (20%)
