
1 Hardware and system development:
Synchronisation and Data communication
Verena V. Hafner, Claas-Norman Ritter, Guido Schillaci
Project Meeting, Erlangen, November 30, 2016

2 Deliverable D4.3 (M36) “Human-Robot Interaction”

3 T4.1 Learning internal models for interaction and prediction
Ego-noise prediction: Nao head movements
[Figure: top: 4 MFCC features; bottom: head yaw (initial joint configuration; rotation applied to the joints from the initial positions).]

4 T4.1 Learning internal models for interaction and prediction
Ego-noise experiments (head movements, Mel log-filterbank energies)
[Figure: Mel log-filterbank energies; left: coherent, center: not coherent, right: shifted.]

5 Main Achievements Y3
• Internal model framework for ego-noise prediction on the Nao robot (UBER, FAU)
• Robot egosphere for visual and auditory attention (UBER, INRIA)
• Linking natural robot behaviour to audio information (BGU, UBER)
• Synchronisation of audio, video and motor signals on the Nao (ALD, UBER)
• Integration inside the (12-mic) Nao robot

6 T1.2 Design of adaptive robomorphic microphone array
Preparatory work: UBER was involved in tackling the issues of synchronising audio signals, motor signals and camera data streams, as well as the communication between NaoQi and MATLAB; both were addressed in two workshops (code camps) during the reporting period. UBER exploited the functionality provided by Modularity and implemented a set of (Modularity) filters for gathering audio, visual and motor data from the robot, for aligning asynchronous data into fixed buffers, and for training internal models and classifying ego-noise, as reported in Deliverable D4.2 (a minimal alignment sketch follows below).
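Modularity itself is Aldebaran-proprietary, so the following is only a minimal sketch of the alignment idea, not the actual Modularity filter API: each stream keeps timestamped samples, and for every sample of a chosen reference stream the nearest-in-time sample of each other stream is selected, yielding fixed-size, index-aligned buffers.

```python
import bisect
from collections import deque

class StreamAligner:
    """Sketch: align asynchronous timestamped streams to a reference stream."""

    def __init__(self, names, maxlen=1000):
        # one bounded buffer of (timestamp, sample) pairs per stream
        self.buffers = {n: deque(maxlen=maxlen) for n in names}

    def push(self, name, timestamp, sample):
        self.buffers[name].append((timestamp, sample))

    def nearest(self, name, t):
        # binary search for the sample whose timestamp is closest to t
        items = list(self.buffers[name])
        times = [ts for ts, _ in items]
        i = bisect.bisect_left(times, t)
        candidates = items[max(i - 1, 0):i + 1]
        return min(candidates, key=lambda p: abs(p[0] - t))[1]

    def aligned(self, reference):
        # one row per reference sample: (t_ref, {stream: nearest sample})
        others = [n for n in self.buffers if n != reference]
        return [(t, {n: self.nearest(n, t) for n in others})
                for t, _ in self.buffers[reference]
                if all(self.buffers[n] for n in others)]

# Usage: push timestamped samples, then read rows aligned to the audio stream.
al = StreamAligner(["audio", "motor", "image"])
```

In the real pipeline, buffers produced this way feed the internal-model training and ego-noise classification filters described in D4.2.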

7 T1.2 Design of adaptive robomorphic microphone array
12-mic signal integration
• external sound card feeding signal back into robot
• synchronising with Aldebaran Modularity framework

8 T4.1 Learning internal models for interaction and prediction
• in parallel: worked on synchronisation of audio, video and motor commands on the Nao
• 2 workshops on synchronisation, in Berlin and Erlangen
• now ready to be integrated into our internal model framework

9 Tech details
• data available on the Nao:
  • motor/sensor (≈ 10 ms)
  • images (≈ max 33 ms)
  • audio (≈ max 40 ms)
• data must be time-stamped
• audio: only 4 channels on the v5 head
• no internal audio with the 12-microphone prototype head
• but a 16-channel Linux-compatible USB card has been working on the Nao since the beginning of April (capture sketch below)
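Since the 12-mic prototype head has no internal audio path, capture goes through the external USB card. A minimal sketch using the python-sounddevice library; the sample rate and the assumption that the card is the default 16-channel input device are mine, not the project's actual capture code:

```python
import sounddevice as sd

SAMPLE_RATE = 48000   # assumption: depends on the USB card's configuration
CHANNELS = 16         # the card exposes 16 channels (12 carry microphones)

def on_block(indata, frames, time, status):
    """Called per captured block; indata has shape (frames, CHANNELS)."""
    if status:
        print(status)
    block = indata.copy()   # time-stamp here and hand to the sync layer

# Stream from the default input device; capture runs for five seconds.
with sd.InputStream(samplerate=SAMPLE_RATE, channels=CHANNELS,
                    callback=on_block):
    sd.sleep(5000)
```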

10 Tech details - efforts
• recompile the kernel with USB audio support
• setup package for all Naos
• lsl (lab streaming layer) as the protocol for sending synchronised data off the robot for outside processing (pylsl sketch below)
• NaoQi interfaces to the egosphere (can receive external data), e.g. a Python script reading MATLAB DOA outputs (BGU + IMPERIAL), NaoQi methods for communicating with the dialogue manager (ALD) and the face tracker (INRIA)
• egosphere as the central hub for behavioural mechanisms
• solved many open issues, such as an initial data-processing delay of up to several seconds that kept growing
• also: the synchronised data will be used for the audio tracker (IMP)
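LSL stamps every sample on a common clock and handles transport to the outside processing machine; pylsl is its Python binding. A minimal sketch (stream name, type, channel count and rate are assumptions, not the project's actual streams):

```python
from pylsl import (StreamInfo, StreamOutlet, StreamInlet,
                   resolve_stream, local_clock)

# On the robot: publish time-stamped motor data.
info = StreamInfo(name='NaoMotor', type='Motor', channel_count=2,
                  nominal_srate=100, channel_format='float32',
                  source_id='nao-motor')
outlet = StreamOutlet(info)
outlet.push_sample([0.1, -0.3], local_clock())   # sample + LSL timestamp

# On the processing machine: receive the stream with its timestamps.
streams = resolve_stream('type', 'Motor')
inlet = StreamInlet(streams[0])
sample, timestamp = inlet.pull_sample(timeout=1.0)
```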

11 Data Synchronisation Timing Diagram

12 Data Synchronisation Timing Diagram

13 Overview of Developed Algorithms

14 Internal Models Architecture
Addressed task: learning and predicting ego-noise
Concept: Self-Organizing Maps and Hebbian learning; data synchronisation
Main benefit over existing approaches: adaptivity, online learning, multimodal body representation
Demonstration: presented in D4.2; live demo at the review meeting in Berlin; video at IROS and RoboCup; live demo at the IROS tech tour in Berlin
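A minimal sketch of the concept, not the project's actual implementation: two small Self-Organizing Maps, one per modality (here assumed to be joint positions and ego-noise features), with a Hebbian association matrix between their winner nodes, so that activity in the motor map predicts the expected auditory activity.

```python
import numpy as np

rng = np.random.default_rng(0)

class SOM:
    """Tiny 1-D Self-Organizing Map."""
    def __init__(self, n_nodes, dim, lr=0.2, sigma=1.5):
        self.w = rng.normal(size=(n_nodes, dim))
        self.lr, self.sigma = lr, sigma

    def winner(self, x):
        return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

    def update(self, x):
        win = self.winner(x)
        dist = np.abs(np.arange(len(self.w)) - win)
        h = np.exp(-dist**2 / (2 * self.sigma**2))   # neighbourhood
        self.w += self.lr * h[:, None] * (x - self.w)
        return win

motor_som = SOM(n_nodes=10, dim=2)    # e.g. head yaw/pitch
audio_som = SOM(n_nodes=10, dim=4)    # e.g. 4 MFCC features
hebb = np.zeros((10, 10))             # motor node -> audio node weights

def train_step(motor_sample, audio_sample, eta=0.1):
    m = motor_som.update(motor_sample)
    a = audio_som.update(audio_sample)
    hebb[m, a] += eta                 # Hebbian: co-active nodes bind

def predict_audio(motor_sample):
    m = motor_som.winner(motor_sample)
    return audio_som.w[np.argmax(hebb[m])]   # expected ego-noise features
```

The predicted features can then be compared with the observed ones for ego-noise classification or attenuation.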

15 T4.1 Learning internal models for interaction and prediction
• Body representation - adaptive model
• Prediction / forward model
• Sensory attenuation
• Sense of agency / self-other distinction
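As a worked illustration of the last three bullets, a sketch under assumed names and an arbitrary threshold; predicted could come from a forward model such as the SOM/Hebbian predictor sketched above:

```python
import numpy as np

def attenuate(observed, predicted):
    """Sensory attenuation: remove the predicted ego-noise component."""
    return observed - predicted

def self_other(observed, predicted, threshold=1.0):
    """Sense of agency: small prediction error -> self-generated sound."""
    error = np.linalg.norm(observed - predicted)
    return "self" if error < threshold else "other"
```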

16 Demo 1 showing internal models for ego-noise prediction

17 T4.1 Learning internal models for interaction and prediction
Ego-noise experiments (head movements, Mel log-filterbank energies)
[Figure: Mel log-filterbank energies; left: coherent, center: not coherent, right: shifted.]

18 Egosphere and attention mechanisms
Addressed task: visual and auditory saliency detection; robot behaviours emerging from attentive mechanisms; analysis of participants' perception of robot behaviour
Concept: multimodal saliency maps; synchronisation; short-term memory mechanisms; habituation and inhibition; display of the robot's internal attentive state
Main benefit over existing approaches: intuitive human-robot interaction; integration of different algorithms and interfacing with robot skills
Demonstration: final real-time prototype; live demo at the review meeting in Berlin
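A minimal sketch of such an attention mechanism, with assumed bin sizes and decay parameters, and no claim to match the project's egosphere code: saliency from different modalities accumulates on a discretised sphere, fades over time (short-term memory), and the attended direction is suppressed after selection (habituation / inhibition of return).

```python
import numpy as np

N_AZ, N_EL = 36, 18           # 10-degree bins over the egosphere
saliency = np.zeros((N_AZ, N_EL))
DECAY = 0.95                  # short-term memory: old saliency fades
INHIBITION = 0.2              # attended bin is suppressed after selection

def add_stimulus(az_bin, el_bin, strength):
    """Fuse a visual or auditory detection into the multimodal map."""
    saliency[az_bin, el_bin] += strength

def attend():
    """Pick the most salient direction, then inhibit it (habituation)."""
    global saliency
    az, el = np.unravel_index(np.argmax(saliency), saliency.shape)
    saliency[az, el] *= INHIBITION
    saliency *= DECAY
    return az, el
```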

19 Demo showing visual and auditory attention using the robot egosphere; integration into the EARS demo with algorithms from WP2 and WP3.

20 Deliverable D4.3 (M36) “Human-Robot Interaction”

21 Synchronisation and Data communication
Addressed task: synchronisation and data communication
Concept: buffering and synchronisation of multimodal data
Main benefit over existing approaches: a prerequisite for many other algorithms
Demonstration: real-time prototype

22 Data Synchronisation Timing Diagram

23 Data Synchronisation Timing Diagram

24 More algorithms
• simulation of robomorphic microphone positions
• goal babbling for efficient sensorimotor learning (sketch below)
• sensory prediction attenuation for self-other distinction
• development of a sense of object permanence
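A minimal sketch of goal babbling on an assumed toy forward model (a planar 2-link arm, not the Nao): goals are sampled in task space, reached with the current inverse estimate plus exploratory noise, and each observed outcome extends a nearest-neighbour inverse model.

```python
import numpy as np

rng = np.random.default_rng(1)
L1, L2 = 0.5, 0.5                       # toy 2-link arm segment lengths

def forward(q):
    """Toy forward model: joint angles -> hand position."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

memory_q = [rng.uniform(-1, 1, 2)]      # visited joint configurations
memory_x = [forward(memory_q[0])]       # and their observed outcomes

def inverse(goal):
    """Nearest-neighbour inverse model over the visited outcomes."""
    d = [np.linalg.norm(goal - x) for x in memory_x]
    return memory_q[int(np.argmin(d))]

for _ in range(500):                    # goal-babbling loop
    goal = rng.uniform(-1, 1, 2)        # sample a goal in task space
    q = inverse(goal) + rng.normal(0, 0.1, 2)   # exploration noise
    x = forward(q)                      # observe the actual outcome
    memory_q.append(q)
    memory_x.append(x)
```

Sampling goals rather than motor commands concentrates exploration on the reachable task space, which is what makes the scheme sample-efficient.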

