
1
Adaptive Methods
Research Methods, Fall 2008
Tamás Bőhm

2
Adaptive methods
- Classical (Fechnerian) methods: the stimulus is often far from the threshold, which is inefficient
- Adaptive methods: accelerated testing
  – Modifications of the method of constant stimuli and the method of limits

3
Adaptive methods
- Classical methods: the stimulus values to be presented are fixed before the experiment
- Adaptive methods: the stimulus values to be presented depend critically on the preceding responses

4
Adaptive methods
- Constituents
  – Stepping rule: which stimulus level to use next?
  – Stopping criterion: when to finish the session?
  – What is the final threshold estimate?
- Performance
  – Bias: systematic error
  – Precision: related to random error
  – Efficiency: number of trials needed for a given precision; measured by the sweat factor

5
Notations
- X_n: stimulus level at trial n
- Z_n: response at trial n
  – Z_n = 1: detected / correct
  – Z_n = 0: not detected / incorrect
- φ: target probability
  – absolute threshold: φ = 50%
  – difference threshold: φ = 75%
  – 2AFC: φ = 50% + 50%/2 = 75%
  – 4AFC: φ = 25% + 75%/2 = 62.5%
- x_φ: the threshold
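The nAFC target probabilities above all follow one rule: chance level plus half of the range above chance. A minimal sketch (the helper name `target_probability` is mine, not from the slides):

```python
def target_probability(n_alternatives):
    """Target probability phi for an n-alternative forced-choice (nAFC) task:
    chance level plus half of the range above chance."""
    chance = 1.0 / n_alternatives
    return chance + (1.0 - chance) / 2.0
```

For 2AFC this gives 0.75 and for 4AFC 0.625, matching the values listed above.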

6
Adaptive methods
- Classical methods: the stimulus values to be presented are fixed before the experiment
- Adaptive methods: the stimulus values to be presented depend critically on the preceding responses:
  X_{n+1} = f(φ, n, Z_n, X_n, Z_{n−1}, X_{n−1}, …, Z_1, X_1)

7
Adaptive methods
- Nonparametric methods:
  – No assumptions about the shape of the psychometric function
  – Can measure the threshold only
- Parametric methods:
  – The general form of the psychometric function is known; only its parameters (threshold and slope) need to be measured
  – If the slope is also known: measure only the threshold

8
Nonparametric adaptive methods
- Staircase method (a.k.a. truncated method of limits, simple up-down)
- Transformed up-down method
- Nonparametric up-down method
- Weighted up-down method
- Modified binary search
- Stochastic approximation
- Accelerated stochastic approximation
- PEST and More Virulent PEST

9
Staircase method
- Stepping rule: X_{n+1} = X_n − δ(2Z_n − 1)
  – fixed step size δ
  – if the response changes, the direction of the steps is reversed
- Stopping criterion: after a predetermined number of reversals
- Threshold estimate: average of the reversal points (mid-run estimate)
- Converges to φ = 50%, so it cannot be used for e.g. 2AFC
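The staircase rule can be simulated in a few lines. This is a sketch under my own assumptions (the function name `staircase` and the simulated-observer interface `detect(x)` are mine), not code from the presentation:

```python
def staircase(detect, start, step, n_reversals=8):
    """Simple up-down staircase.
    detect(x) -> 1 (detected/correct) or 0 (not detected/incorrect)."""
    x, last_z, reversals = start, None, []
    while len(reversals) < n_reversals:
        z = detect(x)
        if last_z is not None and z != last_z:
            reversals.append(x)             # response changed: a reversal point
        last_z = z
        x = x - step * (2 * z - 1)          # X_{n+1} = X_n - delta * (2 Z_n - 1)
    return sum(reversals) / len(reversals)  # mid-run estimate
```

With an idealized observer that always detects at or above level 5, the track ends up oscillating between 4 and 5, and the mid-run estimate lands between the last undetected and first detected level.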

10
Transformed up-down method
- Improvement of the simple up-down (staircase) method
- X_{n+1} depends on 2 or more preceding responses
  – E.g. the 1-up/2-down (2-step) rule:
    - increase the stimulus level after each incorrect response
    - decrease it only after 2 consecutive correct responses
    - converges to φ = 70.7%
- Threshold: mid-run estimate from the reversal points
- 8 rules for 8 different φ values (15.9%, 29.3%, 50%, 70.7%, 79.4%, 84.1%, …)
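The 1-up/2-down rule converges to 70.7% because a downward step requires two correct responses in a row: at equilibrium p² = 0.5, so p = √0.5 ≈ 0.707. A sketch (function name and the simulated-observer interface are mine):

```python
def one_up_two_down(detect, start, step, n_trials=100):
    """1-up/2-down rule: step up after any incorrect response,
    step down only after two consecutive correct responses.
    Returns the sequence of tested stimulus levels."""
    x, correct_run, levels = start, 0, []
    for _ in range(n_trials):
        levels.append(x)
        if detect(x):
            correct_run += 1
            if correct_run == 2:        # two correct in a row -> step down
                x -= step
                correct_run = 0
        else:                           # any incorrect -> step up
            x += step
            correct_run = 0
    return levels
```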

11
Nonparametric up-down method
- Stepping rule: X_{n+1} = X_n − δ(2·Z_n·S_φ − 1)
  – S_φ: a random number
    - p(S_φ = 1) = 1/(2φ)
    - p(S_φ = 0) = 1 − 1/(2φ)
  – After a correct answer:
    - the stimulus is decreased with p = 1/(2φ)
    - the stimulus is increased with p = 1 − 1/(2φ)
  – After an incorrect answer: the stimulus is increased
- Can converge to any φ ≥ 50%
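One step of this rule, written out. The injected random source (`rng`) makes the branch probabilities explicit and testable; the function name is mine:

```python
import random

def np_up_down_step(x, z, delta, phi, rng=random.random):
    """Nonparametric up-down step for a target probability phi >= 0.5.
    S_phi = 1 with probability 1/(2*phi), else 0; then
    X_{n+1} = X_n - delta * (2 * z * S_phi - 1)."""
    s = 1 if rng() < 1.0 / (2.0 * phi) else 0
    return x - delta * (2 * z * s - 1)
```

After an incorrect answer (z = 0) the level always rises; after a correct answer it falls only with probability 1/(2φ), which is what lets the procedure converge above 50%.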

12
Nonparametric up-down method

13
Weighted up-down method
- Different step sizes for upward (δ_up) and downward (δ_down) steps
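A one-line sketch of the rule. At the convergence point the expected movement is zero, i.e. φ·δ_down = (1 − φ)·δ_up, which gives φ = δ_up / (δ_up + δ_down); that derivation is mine, the slide only names the two step sizes:

```python
def weighted_step(x, z, delta_up, delta_down):
    """Weighted up-down: decrease by delta_down after a correct response (z=1),
    increase by delta_up after an incorrect one (z=0)."""
    return x - delta_down if z else x + delta_up
```

With δ_up = 3·δ_down, for example, the procedure targets φ = 3/4 = 75%.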

14
Modified binary search
- Divide and conquer
- The stimulus interval containing the threshold is halved at every step (one endpoint is replaced by the midpoint)
- Stopping criterion: a lower limit on the step size
- Threshold estimate: the last tested level
- Heuristic, with no theoretical foundation
- Figure from Sedgewick & Wayne
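A sketch of the halving procedure with an idealized deterministic observer (real responses are noisy, which is exactly why this heuristic lacks a theoretical guarantee):

```python
def modified_binary_search(detect, lo, hi, min_step=0.01):
    """Halve the stimulus interval [lo, hi] assumed to contain the threshold:
    test the midpoint, then replace one endpoint with it."""
    mid = (lo + hi) / 2.0
    while (hi - lo) / 2.0 > min_step:   # stop at a lower limit on step size
        mid = (lo + hi) / 2.0
        if detect(mid):                 # detected -> threshold below midpoint
            hi = mid
        else:                           # not detected -> threshold above it
            lo = mid
    return mid                          # last tested level
```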

15
Stochastic approximation
- A theoretically sound variant of the modified binary search
- Stepping rule: X_{n+1} = X_n − (c/n)(Z_n − φ)
  – c: initial step size
  – the stimulus value decreases after correct responses and increases after incorrect ones
  – if φ = 50%, the upward and downward steps are equal; otherwise they are asymmetric
  – the step size (both upward and downward) decreases from trial to trial
- Can converge to any φ
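The standard Robbins-Monro form of this rule, X_{n+1} = X_n − (c/n)(Z_n − φ), matches the slide's description (c is the initial step size, and the step shrinks on every trial). A sketch; the deterministic step-function observer in the usage is an idealization of mine:

```python
def stochastic_approximation(detect, start, c, phi, n_trials=500):
    """Stochastic approximation: X_{n+1} = X_n - (c / n) * (Z_n - phi).
    The step size c/n shrinks every trial; for phi != 0.5 the upward and
    downward steps are asymmetric."""
    x = start
    for n in range(1, n_trials + 1):
        z = detect(x)
        x = x - (c / n) * (z - phi)
    return x
```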

16
Stochastic approximation

17
Accelerated stochastic approximation
- Stepping rule: X_{n+1} = X_n − (c / (2 + m_reversals))·(Z_n − φ)
  – first 2 trials: plain stochastic approximation
  – for n > 2: the step size is changed only when the response changes (m_reversals: the number of reversals so far)
- Otherwise the same as stochastic approximation
- Needs fewer trials than stochastic approximation
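The accelerated rule (due to Kesten, 1958) shrinks the step only at response reversals instead of on every trial. A sketch under the same idealized-observer assumption as before:

```python
def accelerated_sa(detect, start, c, phi, n_trials=200):
    """Accelerated stochastic approximation:
    X_{n+1} = X_n - (c / (2 + m_reversals)) * (Z_n - phi) for n > 2,
    where m_reversals counts response changes so far."""
    x, last_z, m_rev = start, None, 0
    for n in range(1, n_trials + 1):
        z = detect(x)
        if n <= 2:
            x = x - (c / n) * (z - phi)   # plain stochastic approximation
        else:
            if z != last_z:
                m_rev += 1                # shrink step only on a reversal
            x = x - (c / (2 + m_rev)) * (z - phi)
        last_z = z
    return x
```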

18
Accelerated stochastic approximation

19
Parameter Estimation by Sequential Testing (PEST)
- Sequential testing:
  – run multiple trials at the same stimulus level x
  – if x is near the threshold, the expected number of correct responses m_c after n_x presentations will be around φ·n_x
  – the stimulus level is changed if m_c is not within φ·n_x ± w
    - w: deviation limit; w = 1 for φ = 75%
- If the stimulus level needs to be changed, the step size is determined by a set of heuristic rules
- Variants: MOUSE, RAT, More Virulent PEST
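The sequential-testing part (a Wald-style deviation check) is easy to state in code; the heuristic step-size rules are not reproduced here. The function name is mine:

```python
def needs_level_change(m_c, n_x, phi, w):
    """PEST sequential test: keep testing at the current level while the
    correct-response count m_c stays within phi * n_x +/- w."""
    return abs(m_c - phi * n_x) > w
```

For φ = 75% and w = 1, after 8 presentations the level is kept while m_c stays within 6 ± 1.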

20
Adaptive methods
- Nonparametric methods:
  – No assumptions about the shape of the psychometric function
  – Can measure the threshold only
- Parametric methods:
  – The general form of the psychometric function is known; only its parameters (threshold and slope) need to be measured
  – If the slope is also known: measure only the threshold

21
Parametric adaptive methods
- A template for the psychometric function is chosen:
  – Cumulative normal
  – Logistic
  – Weibull
  – Gumbel

22
Parametric adaptive methods
- Only the parameters of the template need to be measured:
  – Threshold
  – Slope

23
Fitting the psychometric function
1. Linearization (inverse transformation) of the data points
   – Inverse cumulative normal (probit)
   – Inverse logistic (logit)

24
Fitting the psychometric function
2. Linear regression
3. Transformation of the regression-line parameters: x-intercept & linear slope → threshold & logistic slope
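Steps 1-3 together amount to: logit-transform the proportions, fit a line by least squares, and read the threshold off the x-intercept. A self-contained sketch using the logistic template (all names are mine):

```python
import math

def fit_logistic(levels, p_correct):
    """Fit p(x) = 1 / (1 + exp(-slope * (x - threshold))) by
    (1) logit-linearizing the data, (2) ordinary least squares,
    (3) reading the threshold off the x-intercept of the line."""
    y = [math.log(p / (1.0 - p)) for p in p_correct]   # logit transform
    n = len(levels)
    mx, my = sum(levels) / n, sum(y) / n
    slope = (sum((x - mx) * (v - my) for x, v in zip(levels, y))
             / sum((x - mx) ** 2 for x in levels))
    intercept = my - slope * mx
    return -intercept / slope, slope    # (threshold, logistic slope)
```

Exactly logit-linear data is recovered perfectly: proportions (0.25, 0.5, 0.75) at levels (3, 5, 7) give a threshold of 5.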

25
Contour integration experiment
- D = 2: slope = −0.6
- D = 65: slope = 0.3

26
Contour integration experiment
- 5-day perceptual learning

28
Adaptive probit estimation
- Short blocks of the method of constant stimuli
- Between blocks: the threshold and slope are estimated (a psychometric function is fitted to the data) and the stimulus levels are adjusted accordingly
  – Assumes a cumulative normal function → probit analysis
- Stopping criterion: after a fixed number of blocks
- Final estimate of the threshold and slope: re-analysis of all the responses

29
Adaptive probit estimation
- Start with an educated guess of the threshold and slope
- In each block: 4 stimulus levels, presented 10 times each
- After each block: the threshold and slope are estimated by probit analysis of the responses in the block
- The stimulus levels for the next block are adjusted accordingly
  – The estimated threshold and slope are applied only through correction factors (inertia)

30
Measuring the threshold only
- The function shape (form & slope) is predetermined by the experimenter
- Only the position along the x-axis (the threshold) needs to be measured
- The threshold is estimated iteratively and the stimulus levels are adapted accordingly
- Two ways to estimate the threshold:
  – maximum likelihood (ML)
  – Bayes estimation
- Examples: QUEST, BEST PEST, ML-TEST, Quadrature Method, IDEAL, YAAP, ZEST

31
Maximum likelihood estimation
- Construct the psychometric function for each possible threshold value
- Calculate the probability of the observed responses under each threshold value (the likelihood)
- Choose the threshold value for which the likelihood is maximal (i.e. the psychometric function most likely to have produced the responses)
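With the slope fixed, ML estimation reduces to scanning candidate thresholds for the one that maximizes the log-likelihood of the observed responses. A grid-search sketch (the names and the choice of a logistic template are mine):

```python
import math

def ml_threshold(levels, responses, slope, candidates):
    """Return the candidate threshold maximizing the log-likelihood of binary
    responses under a logistic psychometric function with known slope."""
    def p(x, th):                       # P(correct) at level x, threshold th
        return 1.0 / (1.0 + math.exp(-slope * (x - th)))
    def log_lik(th):
        return sum(math.log(p(x, th) if z else 1.0 - p(x, th))
                   for x, z in zip(levels, responses))
    return max(candidates, key=log_lik)
```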

32
Bayes estimation
- Prior information is also used:
  – the distribution of the threshold in the population (e.g. from a survey of the literature)
  – the experimenter's beliefs about the threshold
- The a priori distribution of the threshold is combined with the values of the psychometric functions at the tested stimulus levels

33
Bayes' theorem (1764)
- A simple rule to turn around a conditional probability:
  P(A|B) = P(B|A) · P(A) / P(B)
- Application to statistical inference:
  P(parameters | observations) = P(observations | parameters) · P(parameters) / P(observations)
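Over a discrete set of hypotheses (e.g. candidate thresholds), the theorem amounts to "multiply the prior by the likelihood, then normalize". A minimal sketch:

```python
def bayes_posterior(prior, likelihood):
    """Discrete Bayes' rule: posterior_i is proportional to
    likelihood_i * prior_i, normalized so the posterior sums to 1."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnorm)                 # P(B), the normalizing constant
    return [u / total for u in unnorm]
```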

34
Bayesian inference
- A posteriori probability: estimates the unknown physical parameters based on the known observations
- Likelihood (conditional probability): predicts unknown observations based on known parameters
- A priori probability: prior knowledge, top-down effects
- The denominator is a normalizing constant

35
Bayesian coding hypothesis
1. Neural information representation is probabilistic (not deterministic)
2. At each stage of local computation: instead of decisions, a representation of all possible values of the parameter & their corresponding probabilities (obtained by Bayesian inference)
3. Final decision: can be the mean, the mode, etc. of the probability distribution
- Advantages:
  – no need to commit too early to particular interpretations
  – the uncertainty of the information is taken into account (e.g. in cue integration)

36
Bayesian coding hypothesis
- Figure from Knill & Pouget, TiNS 2004

37
Bayesian coding hypothesis
- Bayesian theory predicts psychophysical data well (esp. perceptual biases and cue integration)
- Bayesian computational models have been applied successfully in e.g. machine vision & speech recognition
- Neurophysiology: only sparse results so far about the coding of uncertainty in neuronal populations
