1 Wearable Virtual Guide for Indoor Navigation

2 Introduction
– Assistance for indoor navigation using a wearable vision system
– A novel cognitive model represents visual concepts as a hierarchical structure

3 Introduction
Indoor localization and navigation support context-aware services.
Challenges of sensor-based localization:
– Infrastructure cost
– Localization accuracy
– Signal reliability
Recent advances in wearable vision devices (e.g., Google Glass) provide a first-person view (FPV).

4 Introduction
A camera alone is not sufficient to receive visual information; human intelligence needs to be incorporated:
– Representation of navigation knowledge
– Design of interactions between the user and the system
Building this cognitive model is necessary, and it is absent in existing sensor-based systems.

5 Introduction
Mimic human wayfinding behaviour: "Go through the glass door and turn left."
– Easier to follow; reduces stress on the user
– But requires cognitive knowledge of the building
Contributions:
– A model to represent cognitive knowledge for indoor navigation
– An interaction protocol for context-aware navigation

6 Methodology
– Cognitive knowledge representation
– Interaction design with context-awareness
The top level contains three area nodes: (1) Shopping area, (2) Transition area, and (3) Office area. The bottom level contains sub-classes of locations in each area. For example, the conceptual locations in the Shopping area are Lobby, MainEntrance, Shop, GateToLiftLobby, etc.; the locations in the Transition area are LiftLobby, InLift, Entrance, etc.; the Office area has MeetingRoom, Junction, Corridor, Entrance/Exit, etc. The nodes within each level are connected if there is a direct path between them.
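The two-level structure described above can be sketched as plain data: area nodes at the top, child location nodes below, and edges between locations that share a direct path. A minimal sketch, assuming names from the slide; the actual connectivity would come from the building's floor plan, not from this illustrative edge set.

```python
# Two-level cognitive context model: area nodes -> location nodes.
# Names follow the slide; the edge set is illustrative.
AREAS = {
    "Shopping":   ["Lobby", "MainEntrance", "Shop", "GateToLiftLobby"],
    "Transition": ["LiftLobby", "InLift", "Entrance"],
    "Office":     ["MeetingRoom", "Junction", "Corridor", "Entrance/Exit"],
}

# Undirected edges between locations with a direct path (hypothetical).
EDGES = {
    ("Lobby", "MainEntrance"),
    ("Lobby", "GateToLiftLobby"),
    ("GateToLiftLobby", "LiftLobby"),
    ("LiftLobby", "InLift"),
    ("Entrance", "Corridor"),
    ("Corridor", "Junction"),
    ("Junction", "MeetingRoom"),
}

def area_of(location):
    """Return the parent area node for a location node."""
    for area, locations in AREAS.items():
        if location in locations:
            return area
    raise KeyError(location)
```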

7 Cognitive Knowledge Representation Hierarchical context-model

8 Cognitive model
Scenes are mapped to location/area nodes by an image classification algorithm.
Given a source and destination, generate the cognitive route model:
– A chain of area and location nodes
– Area nodes connect their child location nodes
– Defines trip segments
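The route-model generation above can be sketched as a graph search over the location nodes, followed by grouping the path under parent area nodes to obtain trip segments. This is a minimal illustration with a hypothetical chain-shaped graph, not the paper's exact construction.

```python
from collections import deque

# Illustrative connectivity and area membership (names from the slides).
EDGES = {("Lobby", "GateToLiftLobby"), ("GateToLiftLobby", "LiftLobby"),
         ("LiftLobby", "InLift"), ("InLift", "Entrance"),
         ("Entrance", "Corridor"), ("Corridor", "MeetingRoom")}
AREA = {"Lobby": "Shopping", "GateToLiftLobby": "Shopping",
        "LiftLobby": "Transition", "InLift": "Transition",
        "Entrance": "Office", "Corridor": "Office", "MeetingRoom": "Office"}

def route(src, dst):
    """Breadth-first search for the chain of location nodes."""
    prev, frontier = {src: None}, deque([src])
    while frontier:
        loc = frontier.popleft()
        if loc == dst:
            path = []
            while loc is not None:
                path.append(loc)
                loc = prev[loc]
            return path[::-1]
        for a, b in EDGES:
            if loc in (a, b):
                nxt = b if a == loc else a
                if nxt not in prev:
                    prev[nxt] = loc
                    frontier.append(nxt)
    return None

def trip_segments(path):
    """Group consecutive locations under their parent area node,
    yielding the trip segments of the cognitive route model."""
    segments = []
    for loc in path:
        if segments and segments[-1][0] == AREA[loc]:
            segments[-1][1].append(loc)
        else:
            segments.append((AREA[loc], [loc]))
    return segments
```

For example, `trip_segments(route("Lobby", "MeetingRoom"))` yields one segment per area crossed along the way.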

9 Working principle
In an actual navigation task with a given origin and destination, scenes (i.e., image sequences) are captured continuously using the wearable camera. The scenes are categorized into area nodes and location nodes, which are compared with the nodes in the cognitive route model. Once a match is found between a recognized node and one in the cognitive route model, specific navigation instructions are retrieved according to predefined rules.
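The matching loop above can be sketched as follows: consume the stream of recognized nodes, advance through the route model on each match, and look up an instruction for the matched node. The rule table and node names are illustrative assumptions, not the paper's actual instruction set.

```python
def navigate(recognized_nodes, route_model, instructions):
    """Yield an instruction each time a recognized node matches the
    next expected node in the cognitive route model. `instructions`
    maps node -> narration (hypothetical rule table)."""
    expected = iter(route_model)
    target = next(expected, None)
    for node in recognized_nodes:
        if node == target:
            yield instructions.get(node, "Continue")
            target = next(expected, None)
```

A stream like `["Lobby", "Corridor", "LiftLobby"]` against a route model `["Lobby", "LiftLobby"]` would trigger two instructions, skipping the unmatched scene in between.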

10 Interaction Design with Context-awareness
Three types of context:
– Egocentric scene recognition: localization of the user in the environment
– Temporal context: the right time to provide instructions
– User context: the user's cognitive status

11 Interaction Design with Context-awareness
Data collection:
– 6 participants, each visiting 3 destinations in sequence: M0 -> M1 -> M2 -> M3
– Destinations on a different floor, in a different tower, and at the main entrance

12 Training Setup
Self-reported ability of spatial cognition:
– Santa Barbara Sense of Direction (SBSOD) scale
– The score is used to adjust system behavior
Vision system: webcam + tablet PC
– Resolution of 640x480 at 8 frames/sec
– Scenes are sent to a PC
– Human assistant: explicit help / confusion

13 SBSOD

14

15 Localization using cognitive visual scenes
A task-specific route model is constructed:
– Visual scenes are captured continuously and used to build the task-specific route model
Once the route model is built, determine the user's location using two cues:
– Image categorization: image-driven localization achieves accuracy of up to 84%
– Time
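Per-frame image categorization at 8 frames/sec is noisy, so a stream of frame labels is typically stabilised before being matched against the route model. A minimal sketch using sliding-window majority voting; this is a common smoothing technique, not necessarily the authors' exact method.

```python
from collections import Counter

def smooth_labels(frame_labels, window=5):
    """Majority-vote over a sliding window of per-frame category
    labels, suppressing isolated misclassifications in the stream."""
    smoothed = []
    for i in range(len(frame_labels)):
        lo = max(0, i - window + 1)
        votes = Counter(frame_labels[lo:i + 1])
        smoothed.append(votes.most_common(1)[0][0])
    return smoothed
```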

16 Image categorization

17

18 Improve localization using temporal inference
Localization using cognitive visual scenes:
Tackle the dynamics of wayfinding:
– Varying walking times

19 Improve localization using temporal inference
Localization using cognitive visual scenes:
– L_i, t_i^0 (segment and timing notation)
– Rand(p): a random number between 0 and p
First, compute the probability that a scene is associated with that segment.
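The slide's formula is fragmentary, but the idea of a temporal prior can be illustrated: given elapsed walking time and each segment's expected time window, weight the segments by how consistent the elapsed time is with each window. This is a plausible reading under stated assumptions, not the paper's exact computation.

```python
def segment_probability(elapsed, windows):
    """Hypothetical temporal prior: probability that the current
    scene belongs to each trip segment, given elapsed walking time
    (seconds) and each segment's expected (start, end) time window."""
    weights = []
    for start, end in windows:
        if start <= elapsed <= end:
            weights.append(1.0)
        else:
            # decay with distance from the nearest window edge
            d = min(abs(elapsed - start), abs(elapsed - end))
            weights.append(1.0 / (1.0 + d))
    total = sum(weights)
    return [w / total for w in weights]
```

Combining this prior with the image-categorization cue would address the varying walking times mentioned above.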

20 Interaction Design with Context-awareness
Context-aware navigation instructions:
– Provide effective navigation aids
– Determine a decision point D_j and associate a probability value P_j related to the number of subjects n who requested help there
– What if the user does not comply with the aids?

21 Interaction Design with Context-awareness
Context-aware navigation instructions:
– TT_j-1: narration at decision point D_j
– TT_j-2: rephrased narration

22 Interaction Design with Context-awareness

23 Context-aware navigation instructions
– The utility of the instructions is measured using the SBSOD score
– C_p denotes the spatial cognitive level
– At time t_k, the navigation instruction is provided according to the rules below
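The rule-based delivery described above can be sketched as a simple decision function over the two signals the slides mention: the help probability P_j at a decision point and the user's spatial cognitive level C_p. The threshold and the "brief"/"detailed" outcomes are illustrative assumptions; the paper's actual rules are not reproduced here.

```python
def choose_instruction(p_help, spatial_score, threshold=0.5):
    """Hypothetical rule: give the detailed (rephrased) narration
    when many prior subjects needed help at this decision point
    (high p_help) or the user's SBSOD-style score (0..1) is low;
    otherwise a brief narration suffices."""
    if p_help > threshold or spatial_score < threshold:
        return "detailed"
    return "brief"
```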

24 Experimental Evaluation
Participants: 12 (6 M, 6 F)
Experiment design:
– SBSOD score as input for each participant
– Task 1: M0 -> M1; Task 2: M1 -> M2; Task 3: M2 -> M3
– Tasks presented in a different order for each participant
Measures
Hypotheses

25 Experimental Evaluation Measures – Objective

26 Experimental Evaluation Measures – Subjective

27 Experimental Evaluation Hypotheses

28 Results
Task performance:
– One-way ANOVA
– Post-hoc analysis with paired t-tests
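For reference, the one-way ANOVA F statistic used for the task-performance comparison can be computed directly: the ratio of between-group to within-group mean squares. A minimal pure-Python sketch (in practice one would use `scipy.stats.f_oneway`); the sample data in the test is illustrative, not the study's results.

```python
def f_oneway(*groups):
    """One-way ANOVA F statistic: between-group mean square divided
    by within-group mean square, for k groups of numeric samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```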

29 Results
Subjective evaluation:
– Easy to use
– Reduced cognitive load
– More intelligent
– CNG performs poorly in terms of enjoyment, stress, and trust
