Pain management is a fundamental care need, and inadequate management may have deleterious short- and long-term consequences for patients. In adult patients, pain is commonly assessed through self-report on a numerical rating scale. However, self-reporting may not be possible for patients with cognitive dysfunction or for those burdened by medical conditions such as stroke, degenerative diseases, damaged vocal cords, or critical illness. Similarly, pre-verbal children and patients with profound cognitive impairment are unable to self-report and describe their pain level, making pain assessment more challenging. Difficulty with self-report is a significant clinical problem that affects clinical decision-making, so automatic pain detection is important for patient care. In this paper, we propose a spatiotemporal Deep Neural Network (DNN) to detect a patient's pain level from facial landmarks. The proposed system first detects the face in each frame and extracts facial landmarks from the detected face; a sequence of landmark frames is then fed into a 3D convolutional network for high-level spatial and temporal feature extraction, and finally classified into three categories: no pain, mild pain, and severe pain. The proposed system achieved 97% accuracy after being trained and tested on the UNBC-McMaster Pain Archive Database.
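The pipeline described above (landmark frames, a 3D spatiotemporal convolution, and a three-way classification head) can be sketched roughly as follows. Everything here is an illustrative assumption rather than the paper's actual architecture: the 64×64 landmark rasterisation, the single-channel 3×3×3 filter, the 16-frame window, and all function names are placeholders, and the weights are random and untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

def landmarks_to_map(landmarks, size=64):
    """Rasterise (x, y) landmark coordinates in [0, 1] onto a binary map."""
    m = np.zeros((size, size))
    idx = np.clip((landmarks * (size - 1)).astype(int), 0, size - 1)
    m[idx[:, 1], idx[:, 0]] = 1.0
    return m

def conv3d_valid(vol, kernel):
    """Valid 3D cross-correlation (a conv layer), via shift-and-add."""
    kt, kh, kw = kernel.shape
    T, H, W = vol.shape
    out = np.zeros((T - kt + 1, H - kh + 1, W - kw + 1))
    for dt in range(kt):
        for dh in range(kh):
            for dw in range(kw):
                out += kernel[dt, dh, dw] * vol[dt:dt + out.shape[0],
                                                dh:dh + out.shape[1],
                                                dw:dw + out.shape[2]]
    return out

def classify_sequence(landmark_seq, kernel, w, b):
    """Landmark frames -> 3D conv -> ReLU -> global average pool -> softmax."""
    vol = np.stack([landmarks_to_map(f) for f in landmark_seq])  # (T, 64, 64)
    feat = np.maximum(conv3d_valid(vol, kernel), 0.0).mean()     # pooled scalar
    logits = w * feat + b                                        # (3,) class scores
    e = np.exp(logits - logits.max())
    return e / e.sum()  # probabilities over {no pain, mild pain, severe pain}

# Toy input: 16 frames of 68 facial landmarks with random coordinates.
seq = rng.random((16, 68, 2))
kernel = rng.standard_normal((3, 3, 3)) * 0.1
w, b = rng.standard_normal(3), np.zeros(3)
probs = classify_sequence(seq, kernel, w, b)
```

In a real system the single hand-rolled filter would be replaced by a stack of learned 3D convolution layers (e.g. in a deep-learning framework), but the data flow from a temporal window of landmarks to three class probabilities is the same.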