Augmenting spatial awareness with the haptic radar


1 Augmenting spatial awareness with the haptic radar
Álvaro Cassinelli, Carson Reynolds & Masatoshi Ishikawa The University of Tokyo

2 Time to (re)grow antennas on people (and machines)?
Concept & Motivation
- Antennae, hairs and cilia precede eyes in evolutionary development.
- Efficient for short-range spatial awareness (unambiguous, computationally inexpensive).
- Robust (insensitive to illumination & background).
- Easily configurable (hairs at strategic locations) and potentially omni-directional.
+ Today's MOEMS technology enables mass-produced, tiny opto-mechanical devices...
The goal is to develop insect-like artificial antennae: to refigure hair and antennae as a useful sensory modality for people and machines. Antennae provide a simple and efficient method for perceiving space, with unique advantages as sensors: direct and computationally inexpensive range perception, insensitive to illumination conditions. Conclusion: a wearable, augmenting sensing device, a double skin. For the people who were here yesterday and heard Professor Inami's talk: this clearly goes in the direction of "X-men computing".

3 An opto-mechanical hair?
The hair shaft is a steerable beam of light (a laser radar).
- Modular but interconnected structure (artificial skin).
- Local range-to-tactile stimulation.
- Active scanning of the surroundings: proprioception-based; automatic sweeping of the surroundings to extract important features (inspired by the motion of animal whiskers, the two-point touch technique, etc.).
Infrared or ultrasound rangefinder sensors can be used too, but won't be mobile.

4 Optical Hair module structure
External world <-> artificial hair module <-> user.
Module components: steerable laser beam (MOEMS-based laser radar), electronic driver & interface, tactile output (for humans), electronic input/output (for electronic devices, robots, etc.), interconnection network (to other modules).
Input: real-time 3D measures, tracking, surface roughness, etc.
Output: laser display, laser cueing.
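The module described above couples a rangefinder to a tactile actuator. A minimal sketch of that coupling in software terms (all names and the 80 cm default, taken from prototype (b), are illustrative; the actual device is an opto-mechanical unit):

```python
from dataclasses import dataclass

@dataclass
class HairModule:
    """One artificial-hair module: a rangefinder coupled to a tactile actuator."""
    angle_deg: float          # direction the module points on the headband
    max_range_m: float = 0.8  # sensing horizon (the IR prototype reached ~80 cm)

    def tactile_intensity(self, range_m: float) -> float:
        """Map a measured range to a vibration intensity in [0, 1].
        Closer objects vibrate harder; beyond the horizon, silence."""
        if range_m >= self.max_range_m:
            return 0.0
        return 1.0 - range_m / self.max_range_m

# Six modules spaced evenly around a headband, as in both prototypes:
headband = [HairModule(angle_deg=60.0 * i) for i in range(6)]
```

The interconnection network and laser-display output are left out of this sketch; only the local range-to-tactile translation is modeled.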

5 Possible applications
Augmented spatial awareness & sensing:
- Electronic Travel Aid for the visually impaired.
- Augmented spatial awareness for motorcycle riders and workers in hazardous environments.
- Collision avoidance (robotic limbs, vehicles, etc.).
- Augmented sensing & tele-sensing (texture, speed).
Input... but also Human-Machine interface technology:
- "Hairy electronics": a versatile human-computer interface.
- Sensitive spaces: human-aware "hairy" architecture.
- Display: laser-based vectorial graphics, laser annotation of the surroundings (augmented reality, attentional cues).
Thanks to its reconfigurability, the proposed concept (a module formed by coupling a range detector and a tactile stimulator) can target very different areas. ... and output!

6 Laser-based module: feasibility* (MEMS galvano-mirrors)
Concept ("hairy electronics")... and opto-mechanical implementation: MEMS galvano-mirrors. Smart Laser Tracking principle* (can work as an antenna-sweeping mode). (*) "The Smart Laser Scanner", SIGCHI 2005.

7 The haptic radar as a travel aid
A few fundamental questions:
- New sensory modality: how easy is it to appropriate? (Would it be like re-exercising an atrophied one?) Is there a reflex reaction to the range-to-tactile translation?
- Is the brain capable of intuitively integrating data from eyes on the back, the front, the sides...?
Prototype characteristics and limitations:
- Configuration studied: a headband with a few modules. Limitation: non-mobile beam.
- Two prototypes built: one without rangefinders (simulated maze exploration), another with rangefinders (but short range).
I am going to show here a proof of principle of the modular range-to-tactile translation system. Simultaneous, "full-horizon" spatial awareness becomes possible: to my knowledge, this has never been explored before. Can we learn to integrate information from "eyes on the back (as well as on the sides)"? Is it possible to form and visualize a coherent 3D model of the world (with some reasonable training), or will these 360 degrees of awareness always impose a cognitive overload (i.e., we have to selectively pay attention to the "back", as if we were looking there using the rear-view mirrors of a car)? My guess is that it is possible, because we already integrate information from objects behind us (for instance, through the skin or the hairs). The range is very short, but this is good news: the training is not aimed at creating an entirely new modality, but rather at exercising an existing (perhaps atrophied?) one. With no mobile beam, the user relies on body proprioception to give meaning to the tactile cues.

8 (a) Haptic Radar Simulator
Q: How do participants deal with 360° of spatial awareness without previous training?
Simulator features:
- Six actuators & LED indicators.
- No range sensors (controlled virtual space).
- Adjustable horizon of visibility.
- Perception modalities: proximity, open-space.

9 (a) Simulator demo

10 Simulator discussion
- Orientation is rapidly lost => add a compass?
- An interactive horizon of visibility is a necessary feature.
- "Proximity feel" mode is disturbing if many actuators vibrate at the same time => compute a center of gravity.
- "Open-space" perception mode is interesting, but counterintuitive (needs training).
- A continuous range-to-vibration function is not easy to interpret => discretize levels (3 or 4 levels).
- Too few actuators/sensors (annoying jumping effect).
- Vibrators need to be calibrated to produce the same perceived effect (motor characteristics differ, as does sensitivity at each site).
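Two of the fixes suggested in the simulator discussion, discretizing the range-to-vibration function into a few levels and collapsing simultaneous activations into one "center of gravity" direction, could be sketched as follows (function names and the 4-level default are mine, not from the slides):

```python
import math

def discretize(intensity: float, levels: int = 4) -> float:
    """Quantize a continuous intensity in [0, 1] to a few distinct steps,
    which the slides suggest is easier to interpret than a continuous ramp."""
    step = round(intensity * (levels - 1))
    return step / (levels - 1)

def center_of_gravity(angles_deg, intensities):
    """Collapse several simultaneous actuator activations into one dominant
    direction: the intensity-weighted circular mean of the actuator angles."""
    x = sum(i * math.cos(math.radians(a)) for a, i in zip(angles_deg, intensities))
    y = sum(i * math.sin(math.radians(a)) for a, i in zip(angles_deg, intensities))
    return math.degrees(math.atan2(y, x)) % 360.0
```

With this, two equally strong activations at 0° and 60° would drive a single cue at 30°, instead of two simultaneous vibrations.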

11 (b) Prototype with sensors
Q: Can participants avoid an unseen object approaching from behind?
Prototype features:
- Six sensors & vibrators.
- Non-steerable "hairs" (infrared sensors).
- Max range 80 cm (arm's reach).

12 Experiment Design / Results
Hypothesis: participants can avoid an unseen object approaching from behind.
N=10 participants, each with 3 trials.
In 26 of 30 trials, participants moved to avoid the unseen stimulus (p=1.26*10^-5).
In a follow-up questionnaire, participants viewed the system as more of a help (p=0.005), easy (p=0.005), and intuitive (p=0.005).
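The slide does not state which statistical test produced the reported p-value. As an illustration only, a one-sided exact binomial test of 26 avoidances in 30 trials against a 50% "moves by chance" null can be computed with the standard library (the choice of test and null are my assumptions, and the resulting value need not match the slide's exactly):

```python
from math import comb

def binomial_p_one_sided(successes: int, trials: int, p0: float = 0.5) -> float:
    """P(X >= successes) under Binomial(trials, p0): the probability of a
    result at least this extreme if participants merely moved at random."""
    return sum(comb(trials, k) * p0**k * (1 - p0)**(trials - k)
               for k in range(successes, trials + 1))

# 26 avoidance movements in 30 trials:
p_value = binomial_p_one_sided(26, 30)
```

Either way, the result is on the order of 10^-5, far below any conventional significance threshold.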

13 (b) Collision avoidance demo

14 Discussion: immediate problems & possible improvements
- Range detection too short (1 meter max) [next prototype will use ultrasound sensors (up to 6 meters), then laser rangefinders].
- Simultaneous stimuli confusing [only one actuator active at any time, perhaps in the opposite direction (showing the direction of a clear path)].
- Low spatial resolution of actuators [more vibrators / different actuators].
- Variable motor characteristics [individual calibration].
- Linear range-to-tactile function too simplistic [log scale / discrete].
- Effect of rotation is confusing in the simulator [head tracking].
- Sense of direction is rapidly lost when there is no "reference background" [use the "interactive horizon" technique & add a compass cue].
Note the importance of pre-processing information in order to REDUCE THE AMOUNT OF INFORMATION. Compare the works of Leslie Kay (Sonicguide), Tony Heyes (Sonic Pathfinder) and Allan Dodds: "Heyes' approach is rather different from Kay's in that the Sonic Pathfinder deliberately supplies only the minimum but most relevant information for travel needed by the user, whereas Kay strives for more information-rich sonar-based displays." Both systems present information as sound signals...
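The log-scale replacement suggested for the linear range-to-tactile function could look like the sketch below. It concentrates tactile resolution at short range, where collision risk is highest (the 6 m maximum matches the planned ultrasound sensors; the 5 cm near limit is an illustrative constant of mine):

```python
import math

def log_intensity(range_m: float,
                  max_range_m: float = 6.0,
                  min_range_m: float = 0.05) -> float:
    """Logarithmic range-to-vibration map, normalized to [0, 1].
    Equal ratios of range give equal steps of intensity, so most of the
    dynamic range is spent on nearby objects rather than spread linearly."""
    r = min(max(range_m, min_range_m), max_range_m)
    return math.log(max_range_m / r) / math.log(max_range_m / min_range_m)
```

A discrete variant would simply quantize this output into 3 or 4 levels, as the simulator discussion recommends.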

15 Future Research Directions
- Ultrasound sensors (more range: up to 6 meters).
- MEMS-based steerable laser beams (automatic sweeping).
- Evaluate other tactile actuators (skin stretch?) and tactile signals (e.g. tactons).
- More compact MEMS device modules for higher density.
- Grid network of interconnected modules.
- More comprehensive experiments: Can participants navigate through crowds? Can participants predict whether an object will hit them? In the long term, is there habituation to the vibration stimulus?
- fMRI tests on this device to see how the brain learns to decode this information (using pneumatic actuators, as in "Evaluation of a pneumatically driven tactile stimulator device for vision substitution during fMRI studies", Zappe AC, Maucher T, Meier K, Scheiber C, Magn Reson Med, 51(4)).
Other interesting research directions:
- Study the haptic radar for rehabilitation of patients with hemispatial neglect.
- Use the optical hair to write/annotate objects in the surroundings.

16 Questions?

17 Proof-of-principle: Laser annotation
Laser display: visual cues (attentional cueing, augmented reality). A screen-less display. Retinal display?

18 Expected (final) module performance
Module size: roughly 3x2 cm^2, including laser, micromirrors and microcontroller electronics.
Sampling rate: kilohertz range.
Range measurement: using just intensity and a modulated laser diode, up to one or two meters.
Angular "sweeping" speed: depends on the selected micromirror (e.g. 500 rad/s for MEL-ARI devices).
Power: to be studied (at least 200 mW/module...).
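Intensity-based ranging, as mentioned above, can be sketched under an idealized inverse-square assumption: if the reflected power falls as 1/r^2, one calibration point is enough to invert a reading into a range. This is my simplification (real surfaces break it through albedo and incidence angle, which is consistent with the stated one-to-two-meter practical limit):

```python
import math

def range_from_intensity(i_measured: float,
                         i_ref: float,
                         r_ref: float = 1.0) -> float:
    """Crude intensity-based ranging: assuming the reflected laser power
    falls off as 1/r^2, invert a reading using one calibration point
    (intensity i_ref measured at range r_ref, in meters)."""
    return r_ref * math.sqrt(i_ref / i_measured)
```

For example, a return at a quarter of the calibration intensity implies twice the calibration range.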

19 Smart Laser Scanning principle
The laser excursion is intelligently confined to the area of interest. Simplest laser trajectory for tracking: a circular laser "saccade". Fast (kHz range)!
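The circular saccade is just a small circle of mirror-deflection commands traced around the current tracking point at kHz rate; sampling the reflected intensity along the circle tells the tracker which way to re-center it. A minimal sketch of one saccade period (sample count and radius are illustrative):

```python
import math

def circular_saccade(cx: float, cy: float, radius: float, n_samples: int = 64):
    """One period of the circular laser saccade: n_samples (x, y) deflection
    commands tracing a small circle around the current estimate (cx, cy)."""
    return [(cx + radius * math.cos(2.0 * math.pi * k / n_samples),
             cy + radius * math.sin(2.0 * math.pi * k / n_samples))
            for k in range(n_samples)]
```

Because each period is only a handful of points, the full circle can be swept at kilohertz rates, which is what makes the tracking "fast".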

