Perception in 3D Computer Graphics


1 Perception in 3D Computer Graphics
CENG 505: Advanced Computer Graphics

2 Motivation Too much rendering is slow! (told by?)
Take perception into account Don’t waste resources Render less

3 Subtopics Visual Attention Stereo Vision Depth Perception
Top-down, bottom-up perception Stereo Vision Fusion Depth Perception Depth cues Color perception

4 Introduction Utilize perceptual principles in CG
What is perceived matters more than what is drawn on the screen. Do not waste resources on details that are not perceived. Determine visually important regions. Visual attention point of view: Where do we attend? How can this information be utilized in CG?

5 Introduction Look at the picture below

6 Introduction What were the letters: here? And here?
It is easier to remember the "T" than the "J", because the blue circle pops out

7 Introduction
Our aim: finding visually attractive (and important) regions in a computer graphics scene, and utilizing this information to optimize computer graphics.
We propose: models to predict visual attention, and visual-attention-based means to optimize 3D stereo rendering

8 Background Visual attention mechanism Bottom-up Top-down
Bottom-up: object properties, saliency. Top-down: task, prior experiences, etc. (Diagram: viewer and scene)

9 Bottom-up Component & Saliency
Stimuli-driven, unintentional, faster (25-50 ms per item). Saliency: the property of an object representing its visual attractiveness; it is about difference rather than strength, i.e., an object is salient compared to its background

10 Saliency Color difference

11 Saliency Not a specific color: e.g., red is not salient by itself

12 Saliency Color difference

13 Saliency Motion

14 Saliency ... and other properties for example:
hue, luminance, orientation, shape

15 Bottom-up Component Center-surround mechanism
Saliency is related to the difference between fine and coarse scales

16 Top-down Component Task-driven Prior experiences Intentional
Slower (>200 ms)

17 Top-down Component Eye movements on Repin’s picture [Yarbus67]

18 Top-down Component Without a task

19 Top-down Component Task: What are the ages of the people?

20 Top-down Component Task: What were they doing before?

21 Bottom-up vs. Top-down Both components are important
Top-down component: the task is difficult to know; it is highly related to personality, prior experiences, etc., and may also require semantic knowledge (object recognition). Our primary interest is the bottom-up component, which is purely based on scene properties

22 Prev. Work: Saliency Models
2D: Computational modeling of saliency (Itti & Koch 98) Image based Center-surround Luminance, orientation, color-opponency 3D: Mesh saliency (Lee et al. 05) 3D Static models Mean curvatures

23 Prev. Work: Saliency Applications
Saliency based simplification (Lee et al. 2005) Viewpoint selection Direct user’s attention (Kim & Varshney 2008) Thumbnail generation (Mortara & Spagnuolo 2009)

24 Prev. Work: Saliency Applications
Cubist Style Rendering (Arpa et al. 2012) Caricaturization (Cimen et al. 2012)

25 Several studies

26 Proposed Models Saliency Calculation Models
PVS: Per-vertex saliency model (which vertex is salient?) POS: Per-object saliency model (which object is salient?) EPVS: Extended PVS. Plus: Attention-based stereoscopic rendering optimization

27 1/3) Per-Vertex Saliency (PVS)

28 PVS – Feature Extraction
Per-vertex features:
Mean curvature: Meyer et al.'s approach (Meyer02)
Velocity
Acceleration
Hue
Luminance: lum = (r + g + b) / 3
Color opponency: red-green and blue-yellow, as described in (Itti et al. 1998)
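
A minimal sketch of how these per-vertex features might be assembled, assuming animated vertex positions and per-vertex RGB colors in [0, 1]; the function and argument names are illustrative, not from the original work, and mean curvature is assumed to come from a mesh library:

```python
import numpy as np

def per_vertex_features(pos_prev, pos_curr, pos_next, rgb, dt=1.0 / 30):
    """Per-vertex features: velocity, acceleration, luminance, and the
    two color-opponency channels (Itti et al. 1998). Inputs are (n, 3)
    arrays of vertex positions for three consecutive frames and (n, 3)
    RGB colors; mean curvature (Meyer et al. 2002) is omitted here."""
    # Finite-difference velocity and acceleration magnitudes
    v_prev = (pos_curr - pos_prev) / dt
    v_next = (pos_next - pos_curr) / dt
    velocity = np.linalg.norm(v_next, axis=1)
    acceleration = np.linalg.norm((v_next - v_prev) / dt, axis=1)

    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    luminance = (r + g + b) / 3.0                # as defined on the slide

    # Broadly tuned color channels and opponency, following Itti et al.
    R = r - (g + b) / 2.0
    G = g - (r + b) / 2.0
    B = b - (r + g) / 2.0
    Y = (r + g) / 2.0 - np.abs(r - g) / 2.0 - b
    rg_opponency = np.abs(R - G)                 # red-green
    by_opponency = np.abs(B - Y)                 # blue-yellow

    return velocity, acceleration, luminance, rg_opponency, by_opponency
```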

30 PVS – Generate Feature Maps
A feature map stores the center-surround differences of Gaussian-weighted feature averages, where s is the surround level, c is the center level, f is the feature (curvature, velocity, etc.), and i is the vertex index
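
The formula itself did not survive extraction from the slide; the following is a hedged reconstruction of what a Gaussian-weighted center-surround feature map could look like (the exact notation is an assumption):

```latex
% Hedged reconstruction: G(f, i, \sigma) is the Gaussian-weighted average of
% feature f around vertex i at scale \sigma; the feature map stores the
% center-surround difference between a center scale c\varepsilon and a
% surround scale s\varepsilon.
\[
G(f, i, \sigma) =
  \frac{\sum_{j \in N(i, 2\sigma)} f_j \,
        \exp\!\left(-\lVert p_j - p_i \rVert^2 / 2\sigma^2\right)}
       {\sum_{j \in N(i, 2\sigma)}
        \exp\!\left(-\lVert p_j - p_i \rVert^2 / 2\sigma^2\right)},
\qquad
M_f(c, s, i) = \bigl|\, G(f, i, c\varepsilon) - G(f, i, s\varepsilon) \,\bigr|
\]
```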

31 PVS – Center-surround scales
Fibonacci sequence enables reuse: 8 neighborhood levels → 13 center-surround scale pairs
ε = * diagonal of the bounding box
Small scales express differences within small neighborhoods; large scales express differences within larger neighborhoods
Scale pairs: |2ε - 3ε|, |2ε - 5ε|, |3ε - 5ε|, |3ε - 8ε|, |5ε - 8ε|, |5ε - 13ε|, |8ε - 13ε|, |8ε - 21ε|, |13ε - 21ε|, |13ε - 34ε|, |21ε - 34ε|, |21ε - 55ε|, |34ε - 55ε|
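
As a sketch of the reuse argument: each Gaussian-weighted average at a Fibonacci level can be shared by every pair that references that level, and pairing each level with the next one and the one after it yields exactly the 13 pairs above (the code is illustrative, not from the original work):

```python
# The eight Fibonacci neighbourhood levels (multiples of epsilon) used above.
FIB_LEVELS = [2, 3, 5, 8, 13, 21, 34, 55]

def center_surround_pairs(levels=FIB_LEVELS):
    """Pair each center level with the next and next-but-one level,
    giving the 13 center-surround scale pairs listed on the slide."""
    pairs = []
    for k, center in enumerate(levels):
        for step in (1, 2):
            if k + step < len(levels):
                pairs.append((center, levels[k + step]))
    return pairs   # [(2, 3), (2, 5), (3, 5), (3, 8), ..., (34, 55)]: 13 pairs
```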

33 PVS – Normalize & Synthesize
Each feature map is normalized individually using Itti et al.'s normalization operator (Itti98), which promotes uniquely salient regions and suppresses homogeneously distributed saliency values. For each feature, the maps of different scales are linearly added, yielding one saliency map per feature
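
A sketch of that normalization step for per-vertex maps, assuming the N(.) operator of Itti et al. (1998): rescale the map, then weight it by the squared gap between its global maximum and the mean of its other local maxima. The mesh-adjacency interface used here is an assumption:

```python
import numpy as np

def normalize_map(values, neighbors):
    """Itti-style normalization N(.) applied to a per-vertex feature map.
    `values` is a length-n array, `neighbors[i]` lists the 1-ring vertex
    indices of vertex i. Maps with one strong peak are promoted; maps with
    homogeneously distributed activity are suppressed."""
    v = np.asarray(values, dtype=float)
    v_min, v_max = v.min(), v.max()
    if v_max - v_min < 1e-12:
        return np.zeros_like(v)
    v = (v - v_min) / (v_max - v_min)            # rescale to [0, 1]

    global_max = v.max()
    # Local maxima: vertices whose value exceeds all of their neighbours
    local_maxima = [v[i] for i in range(len(v))
                    if neighbors[i] and v[i] > max(v[j] for j in neighbors[i])]
    others = [m for m in local_maxima if m < global_max]
    mean_other = float(np.mean(others)) if others else 0.0

    return v * (global_max - mean_other) ** 2    # (M - m_bar)^2 weighting
```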

34 PVS – Separate Saliency Maps
(Figure: saliency maps for geometry, velocity, acceleration, and velocity + acceleration)

35 PVS – Separate Saliency Maps
(Figure: cloth model with saliency maps for hue, color opponency, and luminance)

36 Models Saliency maps

37 2/3) Per-Object Saliency (POS)
Saliency computation for each object, based on motion. States of motion: motion by itself does not attract attention; motion properties do

38 Per-Object Saliency Model (POS)
States of motion Preview

39 POS – Pre-experiment Eye tracker experiment: motion states vs. user attention

40 POS – Framework

41 POS – Dominance of states

42 POS – Individual Attention
(Plot: saliency values after state occurrences, from the initial response through inhibition of return to the final value)

43 POS – Global Attention Gestalt grouping Gestalt for motion

44 3/3) Extension to PVS: EPVS
How to apply the POS model to single meshes (as in PVS): not to objects, but to vertices. Gestalt psychology: motion-based clustering; clusters are treated as objects, so the POS model becomes applicable

45 EPVS – Motion based clustering
Differential velocity: the difference of a vertex's velocity compared to its surroundings. High differential velocity occurs on motion boundaries. Clustering is performed through all frames
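
A minimal sketch of the differential-velocity computation under assumed inputs (one frame's per-vertex velocities plus 1-ring adjacency); the names are illustrative:

```python
import numpy as np

def differential_velocity(velocities, neighbors):
    """Per-vertex differential velocity: how strongly a vertex's motion
    differs from the average motion of its 1-ring neighbourhood. High
    values mark motion boundaries, which the motion-based clustering can
    use as cluster borders. `velocities` is an (n, 3) array for one frame,
    `neighbors[i]` lists the neighbouring vertex indices of vertex i."""
    diff = np.zeros(len(velocities))
    for i, nbrs in enumerate(neighbors):
        if not nbrs:
            continue
        local_avg = velocities[list(nbrs)].mean(axis=0)
        diff[i] = np.linalg.norm(velocities[i] - local_avg)
    return diff
```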

46 EPVS – Motion based clustering
Clustering Examples

47 EPVS – Saliency for clusters
Identify cluster heads. Cluster head: the best representative vertex of a cluster. Apply the POS model to cluster heads. Velocity-based saliency weighting differentiates a strong motion onset from a weak one

48 EPVS – Results

49 Applications - Simplification
Original Simplified: QEM Simplified: Saliency preserving

50 Applications – Viewpoint Selection

51 Stereoscopic rendering
Binocular vision: different images for each eye; a powerful depth cue. In computer graphics: separate left & right views must be rendered, which decreases performance (doubles the rendering time)

52 Binocular Suppression
Proper views lead to binocular fusion; improper views lead to binocular rivalry, where the stronger view dominates. The effect is local: objects are stronger than the background. (Figure: left view, right view, combined percept)

53 Mixed quality rendering

54 Proposed Hypothesis
Original pair: high-quality LEFT + high-quality RIGHT. Optimized pair: high-quality LEFT + low-quality RIGHT.
Calculate the intensity contrast change: where Saliency_High > Saliency_Low, the high-quality view suppresses; where Saliency_High < Saliency_Low, the low-quality view suppresses

55 Intensity Contrast Intensity contrast = Saliency on luminance channel
Followed Itti's approach for the center-surround operations: 6 DoG maps over pyramid-level pairs (2-5, 2-6, 3-6, 3-7, 4-7, 4-8)
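
A sketch of that computation on a rendered frame, assuming an OpenCV Gaussian pyramid and the six listed center-surround level pairs; it follows the general Itti-style recipe rather than the exact implementation:

```python
import cv2
import numpy as np

CS_PAIRS = [(2, 5), (2, 6), (3, 6), (3, 7), (4, 7), (4, 8)]  # pairs from the slide

def intensity_contrast(image_bgr):
    """Intensity-contrast (luminance-channel saliency) map built from
    center-surround differences over a Gaussian pyramid."""
    lum = image_bgr.astype(np.float32).mean(axis=2)      # (r + g + b) / 3
    pyramid = [lum]
    for _ in range(8):                                    # levels 1..8
        pyramid.append(cv2.pyrDown(pyramid[-1]))

    h, w = lum.shape
    contrast = np.zeros((h, w), np.float32)
    for c, s in CS_PAIRS:
        center = cv2.resize(pyramid[c], (w, h), interpolation=cv2.INTER_LINEAR)
        surround = cv2.resize(pyramid[s], (w, h), interpolation=cv2.INTER_LINEAR)
        contrast += np.abs(center - surround)             # one DoG map per pair
    return contrast / len(CS_PAIRS)
```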

56 Example Case: Blur Original Blurred

57 Example Case: Blur
(Figure: intensity contrast map of the original, intensity contrast map of the blurred image, and the intensity contrast change, with regions of increased and decreased contrast between the two marked)

58 Example Case: Specular reflection
(Figure: specular on vs. specular off)

59 Example Case: Specular reflection
(Figure: intensity contrast map with specular, intensity contrast map without specular, and the intensity contrast change, with regions of increased and decreased contrast marked)

60 Evaluations PVS Model POS Model EPVS Model
Stereo Rendering Optimization

61 1) PVS Model Eye tracker usage (Tobii 1750 eye tracker)
3 short video sequences: ~15 seconds each. Gathered fixation points from 12 subjects. Analyzed the saliency of fixation points within a circular neighborhood of radius r (r = 5% of the visible region), which tolerates the accuracy limits of the eye tracker but brings results closer to the average (a blurring effect)
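
A sketch of the fixation analysis under assumed data layouts (a per-pixel saliency map and eye-tracker fixations in pixel coordinates; the exact definition of the 5% radius is an assumption):

```python
import numpy as np

def fixation_saliency(saliency_map, fixations, radius_frac=0.05):
    """Mean saliency inside a circular neighbourhood of radius r around
    each fixation point, with r = 5% of the visible region (taken here as
    5% of the larger image dimension). The neighbourhood tolerates the
    eye tracker's accuracy limits, at the cost of a slight blurring."""
    h, w = saliency_map.shape
    radius = radius_frac * max(h, w)
    ys, xs = np.mgrid[0:h, 0:w]
    scores = []
    for fx, fy in fixations:                      # fixations as (x, y) pixels
        mask = (xs - fx) ** 2 + (ys - fy) ** 2 <= radius ** 2
        scores.append(float(saliency_map[mask].mean()))
    return np.array(scores)
```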

62 PVS Model Saliency rank of fixation points
17%, 21%, 19%: fixation points have higher saliency values

63 PVS Model Comparison with random users
100 simulated random users looking at random points. Real fixations have significantly higher saliency than random ones; t-test (p < 0.05)
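
A sketch of that comparison, assuming SciPy for the t-test; the number of random points drawn per simulated user is an assumption:

```python
import numpy as np
from scipy import stats

def compare_with_random(real_scores, saliency_map, n_users=100,
                        n_points=50, seed=0):
    """Compare saliency at real fixations against 100 simulated users who
    look at uniformly random points, and return the independent-samples
    t-test result (significance threshold p < 0.05 as on the slide)."""
    rng = np.random.default_rng(seed)
    h, w = saliency_map.shape
    random_scores = []
    for _ in range(n_users):
        ys = rng.integers(0, h, n_points)
        xs = rng.integers(0, w, n_points)
        random_scores.append(float(saliency_map[ys, xs].mean()))
    return stats.ttest_ind(real_scores, random_scores, equal_var=False)
```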

64 2) POS Model 20 dynamic objects
Upon a color change, subjects press a button. Salient objects vs. non-salient objects

65 POS Model 16 subjects. T-test: statistically significant difference (p < 0.05) between highly salient objects vs. random objects, and highly salient objects vs. lowly salient objects

66 3) EPVS Model Same setup as the PVS experiment
Successful for cases where motion saliency dominates

67 4) Stereo rendering optimization
Mixed rendering approach vs. traditional approach, evaluated by t-test (p < 0.05). 61 subjects; 8 computer graphics methods tested

68 Methods-1/2 A B

69 Methods-2/2 A B

70 Results
Blur: Reference pairs {L1-R1, L2-R2, L3-R3}; Test pairs {L1-R2, L1-R3, L1-R4} (1: original … 4: strongly blurred)
Specular: Reference pairs {Loff-Roff, Lon-Ron}; Test pair {Lon-Roff} (on: with specular effect, off: without specular effect)
(Chart: MeanReference vs. MeanTest for conditions 1-2, 1-3, 1-4 and on-off)

71 Results Method inferences
Blur: The original image suppresses. Not enough to apply to a single view only.
Upsampling: The high-resolution image suppresses. The upsampling method is important.
Antialiasing: The antialiased view is suppressed. Not suitable for applying to a single view only.
Specular: The specular highlight suppresses. A good optimization.
Shading: Phong shading suppresses because of the specular component. Suitable.
Mesh simplification: Using different meshes decreases comfort. The number of vertices and the simplification method are influential.
Texture: Similar to upsampling; the high-resolution texture suppresses.
Shadow: Shadowing only one view is not enough. If both views are the same (shadow on or off), comfort is better.

72 Conclusions Several Saliency Models
Saliency information could be used in quality evaluation. How to incorporate top-down attention? Bottom-up as a pre-process, top-down in real time according to the current task. Attention-based stereoscopic rendering optimization: each method could be investigated in more depth; what to do for multi-view displays (more than two views)?

73 Thanks a lot everyone! Any questions?
(Figures: saliency-based best view, optimized stereo pair)

