
1 Loconsole Claudio, Ph.D. Discussion, Scuola Superiore Sant'Anna – 16th November 2012. Advances in human-machine interaction for Upper Limb Rehabilitation and Basic Life Support training. Claudio Loconsole, PERCRO Laboratory, Scuola Superiore Sant'Anna, Pisa, Italy. 16th November 2012

2 Ph.D. Thesis Overview. Human-Machine Interaction and Human-Robot Interaction: methods for perceiving humans and the environment; methods for motion planning. Applications: 1. Upper limb rehabilitation; 2. BLS training; 3. Other HMI applications

3 Upper limb neurorehabilitation: main contributions of the research work & scientific claims. 1. Provide a new upper limb neurorehabilitation therapy scenario. 2. Involve higher cognitive functions in the therapy, such as visuo-motor coordination, which might be beneficial in neuromotor recovery from stroke: a) gaze; b) brain activity; c) interaction with the real 3D world (using smart systems); d) provide a reaching-task system that can also follow moving targets. 3. Provide a trajectory planning strategy that can mimic human behavior during reaching tasks. 4. Give therapists the possibility to exploit our system to propose new therapy approaches, as well as easily interact with the robot and overcome therapist fatigue and the lack of repeatability, the characteristic drawbacks of manual therapy

4 Motivation of the work (1/3). Hemiparesis of the upper extremity is a common impairment affecting patients after stroke; the American Heart Association estimates that stroke affects approximately 795,000 people in the U.S. each year. New robotic interfaces for rehabilitation can overcome some of the major limitations of traditional assisted movement training: o repeatability; o factors for assessing progress; o availability of skilled personnel. Examples: MIT-Manus (Hogan et al., 1992); Armeo, a commercially available replica of the T-WREX (Sanchez et al., 2005); ARMin (Nef and Riener, 2005); L-Exos (Frisoli et al., 2005)

5 Motivation of the work (2/3). Typical robot-assisted training sessions: in case of planar motions, task information is usually visualized on a 2D display placed in front of the patient. The use of a 2D flat screen is a limitation (mandatory for 2-DOF robots such as the MIT-Manus, but not for n-DOF (n>=3) robots)

6 Motivation of the work (3/3). Visualizing the third dimension in therapies involving neurologically impaired persons is beneficial (van den Hoogen WM et al., "Visualizing the third dimension in virtual training environments for neurologically impaired persons: Beneficial or disruptive?", JNER, 05 Oct 2012). A 3D task requires the adoption of stereoscopy, BUT stereoscopic displays pose usability issues to patients due to side effects: sickness, fatigue, and the patient's inability to merge the left/right eye images into a single stereo one (side effects of VR usage in Rizzo and Buckwalter, "Virtual Reality and Cognitive Assessment and Rehabilitation: The State of the Art", 1997). Moreover, it is still unclear how such misalignment affects the processes of recovery and cortical reorganization. [2D vs. 3D Virtual Reality]

7 Motivation of the work (3/3, continued). Visualizing the third dimension is beneficial, and a 3D task requires stereoscopy, with its side effects. So, why not interact directly with the 3D real world?

8 BRAVO, the new neurorehabilitation therapy scenario: the challenges. 1. Look at the object in 3D real-world space

9 BRAVO, the new neurorehabilitation therapy scenario: the challenges. 1. Look at the object in 3D real-world space; 2. Locate the object in space

10 BRAVO, the new neurorehabilitation therapy scenario: the challenges. 2. Locate the object in space; 3. Think of moving towards the object

11 BRAVO, the new neurorehabilitation therapy scenario: the challenges. 3. Think of moving towards the object; 4. Reach the object with the support of an arm exoskeleton

12 Summarizing: the scenario

13 The new gaze-BCI-driven control architecture

14 Challenge 4. Reaching the object with the support of an arm exoskeleton: the L-Exos. The L-Exos (Light Exoskeleton) workspace is approximately 70% of that of the human arm

15 L-Exos trajectory planning: the four issues to overcome. 1. Safety: reachability of the target. If the target is in an unreachable position, dangerous situations can arise. Our solution to this issue is a fast method to calculate the proxy point, a safe point with the following features: it is the reachable point nearest to the target, and it lies on the line that links the present end-effector position with the target

16 L-Exos trajectory planning: the four issues to overcome (continued). 2. Moving targets. In a neurorehabilitation therapy the target to follow and reach can be moved, so the trajectory planning techniques have to support this and give the patient the possibility to perform the task comfortably

17 L-Exos trajectory planning: the four issues to overcome (continued). 3. Redundancy: more than one pose in joint space corresponds to a given end-effector position in the operative space; e.g., for the fixed Cartesian position x = [-0.35 0.10 0.77], q1 = [-54.5 -44.2 51.6 -29.4], q2 = [-70.9 -37.2 6.5 -29.4], q3 = [-89.6 -41.7 -41.1 -29.4]. 4. Concavity of the Cartesian workspace: in a concave workspace it is not always possible to follow a straight path connecting any two reachable points (cf. spherical vs. polar robot workspaces)

18 Reachability issue. Testing reachability in real time is a fundamental requirement for safe working conditions. Two possible approaches: o analytical: for a redundant robot the reachability test is time-consuming; o numerical: faster. We follow the latter approach: 1) offline workspace discretization for fast IK and reachability testing — we discretize the workspace (within the L-Exos workspace bounding box), leading to the creation of a 3D look-up table

19 Reachability issue (continued). The workspace is discretized with a 1 cm step

20 Reachability issue (continued). We work in the inner discretized (concave) workspace, so kinematic singularities and loss-of-functionality issues are completely avoided

21 Reachability issue (continued). At run time a 3D point is quantized, and the 3D look-up table answers the question: reachable or not reachable?
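The discretization step above can be sketched as a boolean 3D look-up table. This is only an illustrative sketch: the grid origin, grid size, voxel size and the reachability predicate are placeholder assumptions, not the actual L-Exos values.

```python
import numpy as np

# Illustrative grid parameters (NOT the real L-Exos workspace values)
VOXEL = 0.05                              # coarse 5 cm voxels for the sketch
ORIGIN = np.array([-0.5, -0.5, -0.5])     # bounding-box corner (assumed)
SHAPE = (20, 20, 20)                      # grid size (assumed)

def build_lookup(is_reachable):
    """Offline phase: test reachability once per voxel centre."""
    table = np.zeros(SHAPE, dtype=bool)
    for idx in np.ndindex(SHAPE):
        p = ORIGIN + (np.array(idx) + 0.5) * VOXEL
        table[idx] = is_reachable(p)
    return table

def query(table, point):
    """Online phase: O(1) reachability test via the look-up table."""
    idx = np.floor((np.asarray(point) - ORIGIN) / VOXEL).astype(int)
    if np.any(idx < 0) or np.any(idx >= np.array(SHAPE)):
        return False
    return bool(table[tuple(idx)])
```

The expensive inverse-kinematics test runs only once per voxel offline; the online query is a constant-time array access.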

22 Safety issue solution. 2) Online proxy computation for safe working conditions. When the target is visible but not reachable, we compute the proxy point, defined as the closest point to the target inside the workspace of the manipulator that belongs to the line linking the present end-effector position and the target point
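The proxy definition above can be sketched as a simple walk along the segment from the end effector to the target, keeping the last reachable sample. The sampling scheme is an illustrative assumption; any workspace test (e.g. the look-up table query) can serve as the reachability predicate.

```python
import numpy as np

def proxy_point(ee_pos, target, is_reachable, steps=200):
    """Return the reachable point nearest to the target on the line
    end-effector -> target: walk towards the target and keep the last
    sample for which is_reachable(p) holds."""
    ee = np.asarray(ee_pos, dtype=float)
    tgt = np.asarray(target, dtype=float)
    best = ee
    for t in np.linspace(0.0, 1.0, steps + 1):
        p = ee + t * (tgt - ee)
        if is_reachable(p):
            best = p
        else:
            break  # left the workspace: stop at the boundary
    return best
```

If the target itself is reachable, the walk completes and the proxy coincides with the target.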

23 Developed trajectory planning techniques (1/3). To overcome the other issues as well, three on-line trajectory planning techniques were developed, based respectively on: 1. discretization of the operative space; 2. artificial potential field theory; 3. synchronized bounded-jerk joint motions. 1) On-line discretized trajectory planning: the planning is performed in the operative space following some control points, which are chosen with graph-search algorithms

24 Developed trajectory planning techniques (2/3). 2) On-line artificial potential field trajectory planning: the planning is performed both in joint space and in operative space, and it allows avoiding obstacles during target-reaching tasks. Four forces/torques are applied to the L-Exos: 1. an attractive force on the end effector exerted by the target to reach; 2. a repulsive force on the end effector exerted by the obstacle safety zone; 3. a repulsive force on the last link exerted by the obstacle safety zone; 4. a repulsive torque on every joint exerted by the mechanical stops
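The attractive/repulsive scheme above follows the textbook artificial potential field idea; a minimal 2D sketch of one integration step is shown below. The gains, safety-zone radius and time step are illustrative assumptions, not the L-Exos controller values, and the joint-space terms are omitted.

```python
import numpy as np

def apf_step(pos, target, obstacle, k_att=1.0, k_rep=0.5, rho0=0.3, dt=0.01):
    """One integration step of a basic artificial potential field:
    an attractive force pulls towards the target; a repulsive force pushes
    away from the obstacle while inside its safety zone of radius rho0."""
    pos = np.asarray(pos, dtype=float)
    target = np.asarray(target, dtype=float)
    obstacle = np.asarray(obstacle, dtype=float)
    force = k_att * (target - pos)                     # attractive term
    d = np.linalg.norm(pos - obstacle)
    if 0.0 < d < rho0:                                 # inside safety zone
        force += k_rep * (1.0/d - 1.0/rho0) / d**2 * (pos - obstacle) / d
    return pos + dt * force
```

Iterating the step drives the point around the obstacle's safety zone and into the target.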

25 Developed trajectory planning techniques (3/3). 3) The Online Fully Synchronized Bounded Jerk trajectory planning method (OFSBJ)* assures: full synchronization of all joints; low computational time for real-time purposes; no need for iterative/optimization processes; no collisions with mechanical stops; exploitation of the redundancy to reproduce typical human behavior during reaching tasks (elbow-down and minimum-distance movements)
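The OFSBJ method itself is detailed in the cited paper; as a simpler illustration of the synchronization idea, the classic minimum-jerk profile below evaluates every joint with the same time law over a shared duration, so all joints start and stop together. This is a related textbook profile, not the OFSBJ algorithm.

```python
import numpy as np

def min_jerk(q0, qf, T, t):
    """Minimum-jerk position profile between joint vectors q0 and qf over a
    shared duration T: the smooth time law s(tau) has zero velocity and
    acceleration at both ends, giving a human-like bell-shaped speed."""
    tau = np.clip(t / T, 0.0, 1.0)
    s = 10*tau**3 - 15*tau**4 + 6*tau**5   # smooth 0 -> 1 interpolant
    q0 = np.asarray(q0, dtype=float)
    qf = np.asarray(qf, dtype=float)
    return q0 + s * (qf - q0)
```

Because the duration T is common, a slow joint never lags behind a fast one: synchronization is obtained by construction.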

26 Ability to simulate human movements during reaching tasks. *Antonio Frisoli, Claudio Loconsole, Riccardo Bartalucci, Massimo Bergamasco, "A new bounded jerk on-line trajectory planning for mimicking human movements in robot-aided neurorehabilitation", Robotics and Autonomous Systems (2012)

27 Technical issues in reverse: 3. Think of moving towards the object. Brain-Computer Interface (BCI): the motor imagery approach. EEG signals are measured over the motor cortex in the μ (8-12 Hz) and β (12-24 Hz) bands; these rhythmic components are affected by movement imagination, generating an Event-Related Desynchronization (ERD) over the contralateral primary motor area related to the limb involved in the imaginary motor task, which can be used to drive or control an external device

28 Technical issues in reverse: 3. Think of moving towards the object (continued). A pattern of thirteen active electrodes was placed over the sensorimotor cortex; EEG acquisition frequency = 256 Hz. Two classified states: move right arm vs. rest. The online data processing begins with a projection of the input channels through a Common Spatial Pattern (CSP) filter: a supervised spatial filtering method for two-class discrimination problems, which finds directions that maximize variance for one class while minimizing variance for the other
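A minimal numpy sketch of the CSP idea is given below (whiten the pooled covariance, then eigendecompose the class-1 covariance). It illustrates the variance-maximizing/minimizing property only; the electrode montage, band-pass filtering and the SVM classification stage of the actual system are omitted.

```python
import numpy as np

def csp_filters(X1, X2, n_pairs=1):
    """Common Spatial Patterns for two classes of EEG trials, each array
    shaped (trials, channels, samples). Returns 2*n_pairs spatial filters
    (rows): the first minimize class-1 variance, the last maximize it."""
    def mean_cov(X):
        covs = [np.cov(trial) for trial in X]   # channels x channels
        C = np.mean(covs, axis=0)
        return C / np.trace(C)                  # normalized covariance
    C1, C2 = mean_cov(X1), mean_cov(X2)
    d, U = np.linalg.eigh(C1 + C2)              # whiten pooled covariance
    P = U @ np.diag(1.0 / np.sqrt(d)) @ U.T
    w, B = np.linalg.eigh(P @ C1 @ P.T)         # eigenvalues ascending
    W = B.T @ P                                 # rows = spatial filters
    return np.vstack([W[:n_pairs], W[-n_pairs:]])
```

The log-variance of each filtered trial is the usual feature fed to the downstream classifier.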

29 Technical issues in reverse: 2. Locate the object in space

30 Target localization and tracking module (1/5). Ability to follow generic textured objects without the need for preliminary training for object recognition; fast modeling of real objects. The algorithm assumes knowledge of the intrinsic parameters (focal length, center of projection) of the Kinect camera and of its pose with respect to the L-Exos frame of reference. The algorithm consists of an initialization phase and a run-time phase


32 Target localization and tracking module (2/5). Initialization phase (t < 1 s): RANSAC (RANdom SAmple Consensus) over the 3D point cloud; clustering over the 3D point cloud; projection to 2D of each clustered object and 2D SURF (Speeded Up Robust Features) feature extraction (the object FOOTPRINT)

33 Target localization and tracking module (3/5). Run-time phase: the smaller the number of pixels involved in the 2D SURF extraction, the faster the algorithm

34 Target localization and tracking module (4/5). Run-time phase: Lucas-Kanade (LK) tracking of the SURF feature footprints in 2D image space; SURF features are not extracted while LK is active. The estimated object position in 2D image space is the center of mass of the 2D SURF feature cloud (only if the number of SURF features is greater than a threshold). Then: 2D → 3D "unprojection" and conversion of the position to the L-Exos reference frame. But what about occlusion conditions?
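The 2D → 3D "unprojection" step is the standard pinhole back-projection, followed by a rigid transform into the robot frame. A minimal sketch, where the intrinsic values used in the example are placeholders, not the calibrated Kinect parameters:

```python
import numpy as np

def unproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with measured depth into a 3D point in the
    camera frame, given pinhole intrinsics (focal lengths fx, fy and
    principal point cx, cy, assumed known from calibration)."""
    return np.array([(u - cx) * depth / fx,
                     (v - cy) * depth / fy,
                     depth])

def to_robot_frame(p_cam, R, t):
    """Express the camera-frame point in the L-Exos reference frame,
    given the camera pose (rotation R, translation t) w.r.t. the robot."""
    return R @ p_cam + t
```

The depth value comes directly from the RGB-D sensor at the tracked pixel, which is what makes the 2D tracking result usable as a 3D reaching target.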

35 Target localization and tracking module (5/5). Run-time phase — the Object Search Procedure (OSP), executed for each object: if the number of extracted SURF features of the FOOTPRINT falls below a feature threshold (heuristically set to 6), a (partial and/or total) occlusion condition is assumed and the OSP is triggered; otherwise the LK algorithm continues

36 The target tracking module & L-Exos at work

37 Performance and robustness of the target localization and tracking module. Accuracy of the tracking position estimation: three tested distances (500 mm, 700 mm, 900 mm); translations of three different offsets (10 mm, 20 mm, 50 mm) along both the Kinect Z axis (i.e., the camera principal direction, corresponding to depth) and the X axis

38 Performance and robustness of the target localization and tracking module. Average execution time of a tracking iteration*: OSP off (common working state), f > 30 Hz; OSP on (rare working state), f ~ 4-5 Hz. *Test bench: Intel Core2 Quad Q9550 PC with 3 GB RAM and Windows 7

39 Performance and robustness of the target localization and tracking module. Robustness to light variation: three different light conditions — very intense, medium, low (black pixels are due to the spatial filter, not to the light condition)

40 Performance and robustness of the target localization and tracking module. Robustness to occlusions

41 Performance and robustness of the target localization and tracking module. Robustness to object roto-translation

42 Technical issues in reverse: 1. Look at and select the object in 3D real-world space. [Head-mounted eye tracker: IR emitter + IR camera; scene camera]

43 Technical issues in reverse: 1. Look at and select the object in 3D real-world space. To overcome the low scene-camera quality (analog, 320×240 pixels) and keep the overall system complexity low (avoiding an additional head tracker for absolute gaze direction estimation), the system only has to understand which object the user is looking at, not to estimate the object's position; this makes it robust. The gaze point either selects an object or falls in a "no action" region
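The "which object, not where" selection can be sketched as a nearest-object test in the scene-camera image; the pixel radius defining the no-action region is an illustrative assumption.

```python
import numpy as np

def gazed_object(gaze_px, objects_px, radius=40.0):
    """Return the index of the tracked object closest to the 2D gaze point,
    or None when the gaze falls in the 'no action' region (no object within
    `radius` pixels; the threshold is a placeholder, not the system value)."""
    if not objects_px:
        return None
    gaze = np.asarray(gaze_px, dtype=float)
    dists = [np.linalg.norm(gaze - np.asarray(o, dtype=float))
             for o in objects_px]
    i = int(np.argmin(dists))
    return i if dists[i] <= radius else None
```

Because only the object identity is needed, the selected index can be resolved to a 3D target by the tracking module, which already knows each object's position.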

44 Typical sequence of subsystem integration: eye tracker + tracking system + trajectory planner (L-Exos). 1. A saccade brings the gaze to select an object; 2. the object's 3D position is located in space; 3. the proxy point is calculated; 4. the on-line trajectory planner reaches the object

45 An eye-tracking system for MoCap purposes. Eye parametrization for MoCap purposes: Pupil Aspect Ratio (PAR); angular eye movements; characterization of eye behaviors; solutions to typical technical problems

46 Experimental description and results. Experimental evaluation to test the usability of the integrated system on a group of 7 subjects: 3 healthy volunteers (all male, 27±7 years old) and 4 chronic stroke patients (3 male, 58±7 years old; 1 female, 71 years old). 1) Training session: the subject wears the BCI system in front of a display; only a visual feedback of a virtual arm controlled through motor imagery is provided; the subject is repeatedly asked either to perform an imaginary movement of the right arm or to mentally hold a rest state; randomly sorted sequence of 40 trials, 20 per task; each task execution (i.e., "rest" or "movement") lasted 5 seconds, spaced by intervals lasting randomly from 3 to 5 seconds, during which the subject could relax concentration. The data acquired during the training session were used to compute the CSP filter parameters and the SVM weights

47 Experimental description and results. 2) Visual condition test: same experimental setup as the training session; real-time feedback of the BCI classification provided to the subject by means of a virtual arm shown on the display. During each task execution period, the virtual arm moved with constant velocity in case of a correct "movement" classification of the currently extracted feature, thus providing the subject a real-time feedback of his/her brain activity

48 Experimental description and results. 3) Robot condition test (integrated BRAVO system): the subject wears the L-Exos and has to: mentally hold the rest state in case of a "rest" task; select by gaze the target he/she intends to reach and imagine reaching the selected real object with the impaired limb in case of a "movement" task. The parameters of the BCI were initialized according to the training phase conducted with the BCI module alone. After the gaze selection, the subject could trigger the start of the robot movement towards the selected object through the BCI interface (brain activity in the "movement" class). The output of the BCI module is used to adjust on-line the maximum jerk, acceleration and speed values for each joint of the robot. After reaching the selected object, the robot automatically brought the patient's arm back to the start position. Randomly sorted sequence of 40 trials, 20 per task


50 Experimental description and results. Antonio Frisoli, Claudio Loconsole, Daniele Leonardis, Filippo Bannò, Michele Barsotti, Carmelo Chisari, Massimo Bergamasco, "A new gaze-BCI-driven control of an upper limb exoskeleton for rehabilitation in real world tasks", IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews (2012)

51 Basic Life Support training. Preamble: cardiac arrest is responsible for more than 60% of adult deaths. Issues: 1. Survival is reduced by 7-10% for each minute CPR is delayed; immediate CPR in a cardiac arrest can improve the chances of survival by up to a factor of three. 2. The outcome after cardiac arrest depends on the quality of chest compressions (immediate feedback during training is needed). Several devices have been developed to provide guidance during Cardiopulmonary Resuscitation (CPR), but more immediate feedback (and cheaper products) is needed. Solution: the Mini-VREM project tries to solve both issues using a cheap RGB-D camera, proposing a more sustainable and widespread quality BLS training

52 The Mini-VREM prototype and its validation


54 The Mini-VREM prototype and its validation. Randomised crossover pilot study: 80 volunteers (40 healthcare professionals + 40 lay people). Session: 1) a two-minute chest compression (CC) trial; 2) a 1-hour pause and a second two-minute CC trial; 3) two groups: the 1st group (n=40) performed CC with Mini-VREM feedback (FB) followed by CC without feedback (NFB); the 2nd group (n=40) performed the reverse. Primary endpoints: 1) compression rate within 100-120 min^-1; 2) compression depth within 50-60 mm. Secondary endpoints: the change in CC rate and depth between the feedback and no-feedback performances according to the sequence of training and pre-existing experience
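The two primary endpoints can be checked from a chest-displacement signal over time. The sketch below (simple local-maximum peak detection with an assumed 20 mm noise floor) is illustrative, not the Mini-VREM algorithm.

```python
import numpy as np

def cc_metrics(disp_mm, fs):
    """Estimate compression rate (min^-1) and mean compression depth (mm)
    from a chest-displacement signal (mm) sampled at fs Hz, by detecting
    local maxima above a 20 mm floor (an illustrative threshold)."""
    d = np.asarray(disp_mm, dtype=float)
    peaks = [i for i in range(1, len(d) - 1)
             if d[i] > d[i-1] and d[i] >= d[i+1] and d[i] > 20.0]
    if len(peaks) < 2:
        return 0.0, 0.0
    rate = 60.0 * fs / float(np.mean(np.diff(peaks)))  # compressions/min
    depth = float(np.mean(d[peaks]))                   # mean peak depth
    return rate, depth

def in_target(rate, depth):
    """Check the study's primary-endpoint windows."""
    return 100.0 <= rate <= 120.0 and 50.0 <= depth <= 60.0
```

Running this on each feedback interval is enough to tell the trainee, in real time, whether the current rate and depth fall inside the target windows.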

55 The Mini-VREM prototype validation results. Quantitative evaluation — compared to the performance without feedback, with Mini-VREM feedback: 1. compressions were more adequate (FB 35.78% vs. NFB 7.27%, p<0.001); 2. more compressions achieved the target rate (FB 72.04% vs. 31.42%, p<0.001); 3. more compressions achieved the target depth (FB 47.34% vs. 24.87%, p=0.002)

56 The Mini-VREM prototype validation results. Overall consideration: the Mini-VREM system significantly improved the CC performance of healthcare professionals and lay people in a simulated cardiac arrest scenario, in terms of compression rate and depth. Qualitative evaluation of Mini-VREM among the population (in terms of user-friendliness, feedback visibility and audibility, and overall feedback effectiveness on rate and depth): according to the seven-point Likert-scale results, Mini-VREM was perceived as easy to use, with a good feedback presentation, and providing effective guidance

57 Other HMI applications: MoCap facial system; intention recognition; marker-less 3D Kinect-based system for facial anthropometric measurements; real-time geometrical facial features method for Automatic Emotion Recognition

58 Marker-less 3D Kinect-based system for facial anthropometric measurements. o Through the joint use of the FaceTracker* software and the Kinect device, it is possible to detect facial landmarks in 3D space without wearing markers. o Once the spatial positions of the facial landmarks are known, it is possible to measure the distances between selected pairs of them (66 landmarks; 13 landmarks used; 11 measurements). *J. Saragih, S. Lucey, and J. Cohn, "Deformable model fitting by regularized landmark mean-shift", International Journal of Computer Vision, pp. 1-16, 2011
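The measurement step reduces to Euclidean distances between selected 3D landmark pairs; a minimal sketch, where the landmark indices in the example are illustrative, not the 13 actually used:

```python
import numpy as np

def facial_measurements(landmarks, pairs):
    """Given an (N, 3) array of 3D facial landmark positions, return the
    Euclidean distance for each requested (i, j) landmark pair."""
    L = np.asarray(landmarks, dtype=float)
    return {(i, j): float(np.linalg.norm(L[i] - L[j])) for i, j in pairs}
```

With the Kinect providing metric depth, these distances come out directly in real-world units, which is what makes marker-less anthropometry possible.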

59 Marker-less 3D Kinect-based system for facial anthropometric measurements. o Software pipeline and experiments: software pipeline; comparison experiment modalities; comparison experiment results (3 methods; 36 subjects; 11 facial landmark distances); systematic error

60 Real-time geometrical facial features method for Automatic Emotion Recognition. Objective: an emotion classifier based on geometrical facial features, using ellipsoid eccentricity and normalized linear measurements to recognize facial emotions starting from facial landmarks; 3 face regions, for a total of 8 ellipsoids + 3 linear measurements. Example of construction of the upper-mouth ellipsoid: calculate the average of the upper three lip landmarks; calculate the axes of the ellipsoid; construct the ellipsoid and calculate its eccentricity e = sqrt(a^2 - b^2)/a ∈ [0,1], where e = 0 for a perfect circle and e = 1 for a segment
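The eccentricity formula on this slide as a small helper, with the axis ordering handled so the argument of the square root stays non-negative:

```python
import math

def eccentricity(axis1, axis2):
    """Eccentricity e = sqrt(a^2 - b^2) / a of an ellipse with semi-axes
    axis1 and axis2: e = 0 for a perfect circle, and e -> 1 as the ellipse
    degenerates into a segment."""
    a, b = max(axis1, axis2), min(axis1, axis2)
    return math.sqrt(a * a - b * b) / a
```

Because e is dimensionless, the feature is insensitive to the absolute face size, which helps across subjects with different anthropometric traits.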

61 Real-time geometrical facial features method for Automatic Emotion Recognition. There are several examples of applications developed for automatic emotion recognition (Ekman et al., 1978; Fischer, 2004; Zhang et al., 2012, …). The main objectives related to creating a tool for automatic emotion recognition: 1. real-time requirement: communication between humans is a real-time process with a time scale on the order of about 40 milliseconds*; 2. capacity to recognize multiple standard emotions on people with different anthropometric facial traits; 3. capacity to recognize facial emotions without a neutral-face comparison calibration. *Bartlett et al., "Real time face detection and facial expression recognition: Development and applications to human computer interaction", Computer Vision and Pattern Recognition Workshop, CVPRW'03, 2003

62 Real-time geometrical facial features method for Automatic Emotion Recognition. Using ellipsoid eccentricity to recognize facial emotions: total number of ellipsoids = 8: 1) two for the mouth (upper and lower); 2) two for each eye (total of 4: two upper and two lower); 3) one for each eyebrow (total of 2: two upper)

63 Real-time geometrical facial features method for Automatic Emotion Recognition: example

64 Real-time geometrical facial features method for Automatic Emotion Recognition. Picture database structure (the Radboud database) — Origin: 1. Caucasian, 2. Moroccan, 3. kid; Sex: 1. female, 2. male; Direction: 1. frontal, 2. left, 3. right; State: 1. angry, 2. contemptuous, 3. disgusted, 4. fearful, 5. happy, 6. neutral, 7. sad, 8. surprised. Population statistics — total number of people: 67 (39 Caucasian adults, 18 Moroccans, 10 Caucasian kids; 42 male, 25 female); for each emotional state, the three directions are taken. Total number of pictures: 1608; validated pictures after processing with FaceTracker: 1385

65 Real-time geometrical facial features method for Automatic Emotion Recognition. Considered feature subsets: 1. only linear features (S1: 3 elements); 2. only eccentricity features (S2: 8 elements); 3. both eccentricity and linear features (S3: 11 elements); 4. differential eccentricity and linear features with respect to those calculated for the neutral face (S4: 11 elements); 5. all features, corresponding to the union of S3 and S4 (S5: 22 elements). The subsets are grouped as intra-person-independent vs. intra-person-dependent

66 Real-time geometrical facial features method for Automatic Emotion Recognition. Time results: decoupling the time required for the FaceTracker processing (to identify facial landmarks), the calculation of our proposed features requires 1.9 milliseconds. In the best case (using physical facial markers), it is possible to reach working frequencies higher than 500 Hz; performance is then limited only by the camera's technical limits (e.g., 30 Hz)

67 Real-time geometrical facial features method for Automatic Emotion Recognition

68 Upper limb rehabilitation: the BRAVO project. Brain computer interfaces for Robotic enhanced Action in Visuo-motOr tasks (BRAVO): a three-year project (2010-2013) funded by the Italian Institute of Technology (IIT) in the assistive and rehabilitation robotics field. Consortium partners: o PERCRO Laboratory, Scuola Superiore Sant'Anna (SSSA); o Università degli Studi di Bologna; o INAIL (Italian National Institute for Insurance against Accidents at Work) Prosthesis Centre

69 Basic Life Support training: the Mini-Virtual Reality Enhanced Mannequin (Mini-VREM) project (not yet funded)
Partners:
o Federico Semeraro (Specialty Doctor, Anaesthesia and Intensive Care, Maggiore Hospital, Bologna, Italy)
o Antonio Frisoli (Associate Professor, Scuola Superiore Sant'Anna, Pisa, Italy)
o Claudio Loconsole (Ph.D. Student, Scuola Superiore Sant'Anna, Pisa, Italy)
o Filippo Bannò (Ph.D. Student, Scuola Superiore Sant'Anna, Pisa, Italy)
o Luca Marchetti (CEO, Studio Evil, d-Sign srl, Bologna, Italy)
o Umberto Olcese (Post Doc, Neuroscience and Brain Technologies, Istituto Italiano di Tecnologia)
o Erga Cerchiari (Clinical Director, Anaesthesia and Intensive Care, Maggiore Hospital, Bologna, Italy | President, Italian Resuscitation Council)

70 Other HMI applications: VERE project – WP3-6
Virtual Embodiment and robotic Re-Embodiment (VERE)
Integrated Project funded under the European Seventh Framework Programme, Future and Emerging Technologies (FET), Grant Agreement Number 257695
Visiting Ph.D. student @ Porto Interactive Center (PIC) Lab, Faculdade de Ciências da Universidade do Porto, Departamento de Ciências de Computadores

71 Publications (1/2)

On peer-reviewed journals:
1. A. Frisoli, C. Loconsole, D. Leonardis, F. Bannò, M. Barsotti, M. Bergamasco, "A new BCI, gaze and Kinect-based active guidance mode for upper limb robot-aided neurorehabilitation in real world tasks", IEEE Transactions on Systems, Man, and Cybernetics - Part C: Applications and Reviews, 2012 (IF 2011: 2.02)
2. A. Frisoli, C. Loconsole, R. Bartalucci, M. Bergamasco, "A new bounded jerk on-line trajectory planning for mimicking human movements in robot-aided neurorehabilitation", Robotics and Autonomous Systems, 2012 (IF 2011: 1.056)

On peer-reviewed journals (conditionally accepted):
3. F. Semeraro, A. Frisoli, C. Loconsole, F. Bannò, G. Tammaro, G. Imbriaco, L. Marchetti, E. L. Cerchiari, "Motion detection technology as a tool for cardiopulmonary resuscitation (CPR) quality training: a randomised crossover manikin pilot study", Resuscitation, 2012 (IF 2011: 3.601)
4. C. Loconsole, P. Tripicchio, A. Piarulli, E. Ruffaldi, F. Tecchia, M. Bergamasco, "On multiple user perspectives in passive stereographic virtual environments", Computer Animation and Virtual Worlds, 2012 (IF 2011: 0.394)

On book chapters:
5. L. Sterpone, F. Collino, G. Camussi, C. Loconsole, "Analysis and clustering of microRNA array: a new efficient and reliable computational method", chapter in "Advances in Experimental Medicine and Biology, Volume 696: Software Tools and Algorithms for Biological Systems", Part 8, pp. 679-688, Springer (The Netherlands), 2011

On peer-reviewed conferences:
6. C. Loconsole, R. Bartalucci, A. Frisoli, M. Bergamasco, "An online trajectory planning method for visually guided assisted reaching through a rehabilitation robot", International Conference on Robotics and Automation (ICRA), May 9-13, 2011, Shanghai (China)
7. C. Loconsole, R. Bartalucci, A. Frisoli, M. Bergamasco, "A new gaze-tracking guidance mode for upper limb robot-aided neurorehabilitation", World Haptics Conference, June 22-24, 2011, Istanbul (Turkey)
8. M. Bergamasco, A. Frisoli, M. Fontana, C. Loconsole, D. Leonardis, M. Troncossi, M. M. Foumashi, V. Parenti Castelli, "Preliminary results of BRAVO Project, Brain Computer Interface for Robotic Enhanced Rehabilitation", International Conference on Rehabilitation Robotics (ICORR) 2011, June 29 - July 1, 2011, Zurich, Switzerland

72 Publications (2/2)

On peer-reviewed conferences:
9. M. A. Padilla, S. Pabon, A. Frisoli, E. Sotgiu, C. Loconsole, M. Bergamasco, "Hand and Arm Ownership Illusion through Virtual Reality Physical Interaction and Vibrotactile Stimulations", Eurohaptics, July 8-10, 2010, Amsterdam (The Netherlands)
10. C. Loconsole, N. Barbosa, A. Frisoli, V. Costa Orvalho, "A new marker-less 3D Kinect-based system for facial anthropometric measurements", VII Conference on Articulated Motion and Deformable Objects, AMDO 2012, Andratx, Mallorca, Spain
11. C. Loconsole, F. Bannò, A. Frisoli, M. Bergamasco, "A new Kinect-based guidance mode for upper limb robot-aided neurorehabilitation", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2012, Vilamoura, Algarve, Portugal, October 7-12, 2012. BEST STUDENT PAPER FINALIST
12. F. Semeraro, A. Frisoli, C. Loconsole, F. Bannò, G. Tammaro, G. Imbriaco, L. Marchetti, E. Cerchiari, "Mini-VREM (Virtual Reality Enhanced Mannequin) project: a prospective, randomised crossover design study on healthcare professionals and lay rescuers", Decennale MIMOS - Movimento Italiano Modellazione e Simulazione, Rome, October 9-11, 2012
13. F. Semeraro, A. Frisoli, C. Loconsole, F. Bannò, L. Marchetti, E. L. Cerchiari, "Mini-VREM (Virtual Reality Enhanced Mannequin) project: motion detection technology as a tool for cardiopulmonary resuscitation (CPR) quality improvement", La Medicina Incontra la Realtà Virtuale: Applicazioni in Italia della Realtà Virtuale in Medicina e Chirurgia, Bologna (Italy), November 3, 2011

Submitted to peer-reviewed conferences:
14. C. Loconsole, C. Runa, G. Augusto, V. Orvalho, "Real-time geometrical facial features method for Automatic Emotion Recognition", Eurographics 2013, May 6-10, 2013, Girona, Spain
15. C. Loconsole, A. Frisoli, E. Sotgiu, A. di Fava, M. Banitalebi Dehkordi, M. Fontana, M. Bergamasco, "Supervised haptic-based guidance using IMU in indoor environments for visually impaired people", International Conference on Robotics and Automation (ICRA), May 6-10, 2013, Karlsruhe (Germany)
16. C. Loconsole, D. Leonardis, M. Barsotti, A. Frisoli, M. Solazzi, M. Bergamasco, M. Troncossi, M. M. Foumashi, C. Mazzotti, V. Parenti Castelli, "EMG-based robotic-assisted system for bilateral training of hands", World Haptics Conference, April 14-17, 2013, Daejeon (Korea)

73 Prizes
1) World Simulink Student Challenge 2011
Claudio Loconsole, "Reaching with Matlab/SIMULINK"
1st place winner
2) Best Student Paper Finalist, IROS 2012
C. Loconsole, F. Bannò, A. Frisoli, M. Bergamasco, "A new Kinect-based guidance mode for upper limb robot-aided neurorehabilitation", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2012, Vilamoura, Algarve, Portugal, October 7-12, 2012
3) "Inventare il futuro", category "Infanzia, terza età e disabilità", Università di Bologna and Fondazione del Monte di Bologna e Ravenna, 2011
Claudio Loconsole, Daniele Leonardis, Braille Painter
2nd place winner
4) Start-CUP CNR 2012
Claudio Loconsole, Federico Semeraro, Antonio Frisoli, Filippo Bannò, Umberto Olcese, Mini-VREM project
Top-3 finalist, category "life sciences"
5) Digit@lia for talent
Claudio Loconsole, Federico Semeraro, Antonio Frisoli, Filippo Bannò, Erga Cerchiari, Mini-VREM project
Special Mention

74 Thanks for your attention (Obrigado pela vossa atenção) and thanks to (e obrigado a):
Antonio Frisoli, Massimo Bergamasco, Veronica Costa Orvalho & PIC people, Vitoantonio Bevilacqua, Massimiliano Solazzi, Daniele Leonardis, Filippo Bannò, Edoardo Sotgiu, Michele Barsotti, Caterina Procopio, Carmelo Chisari, Federico Semeraro, Carlo Alberto Avizzano, Raffaello Brondi, Marcello Carrozzino, Miguel Castaneda, Federico Ciardi, Chiara Evangelista, Gabriele Facenza, Francesca Farinelli, Alessandro Filippeschi, Marco Fontana, Paolo Gasparello, Emanuele Giorgi, Basilio Lenzo, Vittorio Lippi, Alessandro Nicoletti, Umberto Olcese, Dario Pellegrinetti, Andrea Piarulli, Emanuele Ruffaldi, Fabio Salsedo, Elisabetta Sani, Massimo Satler, Alessandra Scucces, Franco Tecchia, Paolo Tripicchio, Federico Vanni, Rocco Vertechy
c.loconsole@sssup.it
QUESTIONS? (Perguntas?)




