
Analysis of Pupillary Data Society for Psychophysiological Research Workshop, Boston, September 14, 2011.




1 Analysis of Pupillary Data Society for Psychophysiological Research Workshop, Boston, September 14, 2011

2 Approaches to Pupillary Data Analysis
– Scale data to mm (can be performed at several points up to statistical analysis, discussed earlier)
– Identify and correct for blinks and other artifacts
– Smooth data through filtering algorithms
– Segregate trials according to conditions (for repeated measures designs)
– Identify stimulus onset for stimulus-locked analyses
– Identify time of RT for response-locked analyses
– Apply signal averaging for repeated measures designs as appropriate
– Determine critical variables to be used for analyses (absolute diameters, amplitude of change, peak responses, time of critical responses, frequency analyses, etc.)
– Consider more sophisticated approaches, including Principal Components Analysis, Fourier/wavelet analyses, and waveform comparison models

3 Notes on Pupillary Signal Averaging
– Signal averaging approaches may be applied to pupillary data in repeated measures experiments just as they are in event-related potential experiments, with all of the same caveats.
– The advantage is that small changes on the order of 0.01 mm or better (sometimes as fine as several one-thousandths of a mm) can be reliably obtained.
– The pupil is affected by many types of ongoing changes in visual stimulation, CNS activity, etc., which make signal averaging advantageous, but the pupillary signal has a higher signal-to-noise ratio than the EEG, so fewer trials are needed to extract a good averaged response, perhaps with minimums of:
 – 3-5 trials for light reactions
 – 5-10 trials for motor-initiated dilations
 – 5-10 or more trials for other cognitive and emotional events
– Note that orienting or highly novel events are limited in the number of trials that can be obtained because of the nature of the task.
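The point-by-point signal average described above can be sketched in a few lines. This is an illustrative Python sketch (the workshop's own code is MATLAB), assuming trials are stored as equal-length lists of diameter samples:

```python
def average_trials(trials):
    """Point-by-point signal average across equal-length trials."""
    if not trials:
        raise ValueError("need at least one trial")
    n = len(trials)
    length = len(trials[0])
    if any(len(t) != length for t in trials):
        raise ValueError("all trials must have the same length")
    return [sum(t[i] for t in trials) / n for i in range(length)]

# e.g., three noisy light-reaction trials around the same response
trials = [[4.0, 3.6, 3.2, 3.5],
          [4.2, 3.8, 3.0, 3.6],
          [4.1, 3.7, 3.1, 3.4]]
avg = average_trials(trials)   # approximately [4.1, 3.7, 3.1, 3.5]
```

With only 3-5 such trials the averaged light reaction is already far cleaner than any single trial, which is the rationale for the trial minimums listed above.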

4 Pupillary Response at Visual Threshold G. Hakerem & S. Sutton, Nature, 1966

5 Pupillary Light Reactions
– A single pupillary light reaction is not likely to be a good representation; usually at least 3-5 responses should be averaged.
– The light reaction in darkness is a function of both stimulus characteristics and time of dark adaptation.
– When a series of light stimuli is presented, the first response will start at a larger diameter and show a stronger contraction than responses to subsequent stimuli.
– Unless standard stimuli and interstimulus intervals are employed, data are likely to be interpretable only through complex non-linear analyses, not simple averaging across all stimuli.
– At high rates of visual stimulation, pupillary responses become integrated (i.e., individual light reactions are not identifiable).

6 (for description of multiple measures, see Steinhauer et al., Psychophysiology, 1992)

7 (figure: components of the light reflex: Baseline, Start of Contraction, Maximum Velocity, Minimum Diameter, Redilation)

8 Correcting for Artifacts
– Identify changes in diameter between subsequent points that are too large to be physiologically meaningful, or where pupil diameter is out of range (e.g., scaled value is 0 or 11 mm).
– If the blink/closure/artifact occurs at a critical time (during a brief visual stimulus, at the minimum of constriction, at the peak of a dilation interval), it is usually best to discard the trial entirely.
– To apply a linear correction, identify both the last good data point before the artifact (p1) and the first good data point after the artifact (pL). Take the difference between them (D = pL - p1), divide by the number of points in the artifact plus one (D / (n_bad + 1)), and add this amount to p1 incrementally for each of the substituted points.
– In our laboratories, we have found linear interpolation to be an appropriate and easily implemented correction for most types of data, although several researchers have employed more complex curve fitting to individual data.
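The linear correction above can be sketched directly. This is an illustrative Python sketch (not toolkit code), where `start` and `end` are the indices of the first and last bad samples:

```python
def interpolate_artifact(samples, start, end):
    """Linearly interpolate over samples[start..end] (the bad points),
    using the last good point before and the first good point after."""
    p1 = samples[start - 1]          # last good data point before the artifact
    pL = samples[end + 1]            # first good data point after the artifact
    n_bad = end - start + 1
    step = (pL - p1) / (n_bad + 1)   # D / (n_bad + 1), as on the slide
    fixed = list(samples)
    for k in range(1, n_bad + 1):
        fixed[start + k - 1] = p1 + k * step
    return fixed

trace = [4.0, 4.1, 0.0, 0.0, 0.0, 4.5]   # three out-of-range samples (blink)
clean = interpolate_artifact(trace, 2, 4)
# clean[2:5] is approximately [4.2, 4.3, 4.4]
```

Note the increment uses n_bad + 1 in the denominator so the substituted points ramp evenly from p1 up to (but not onto) pL.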

9 Filtering Data (1)
– Why filter if the pupil is a (relatively) slowly changing measure? Except for some light reactions, factors such as magnification, measurement, and signal processing introduce noise that is unrelated to true pupillary changes. Filtering eliminates small artifacts that can be seen in the electronically derived continuous pupil waveform.
– Should filtering be performed before or after signal averaging? Computationally it does not matter, but subsequent artifact correction and component identification are easier if filtering is performed first.
– Do not filter if you are concerned about very precise time transients (e.g., start of the light reaction to the nearest msec), as filtering smooths any sharp peaks and may shift some peak activity slightly.

10 Filtering Data (2)
– There are a number of different approaches to filtering. We will describe a simple procedure described by Ruchkin & Glaser (1978).
– Each point is recalculated beginning with the original data set, using a band of points centered on each original point. The larger the number of points, the greater the effective filtering.
– Assume an initial sampling rate of 60 Hz (16.7 msec between samples). The frequency of the filter f0 is characterized as 1/[(2L+1)T], where L is the number of points below and above the center point, and T is the sampling interval (16.7 ms).
– Ruchkin and Glaser noted that the transfer function has a high secondary peak, so the "first pass" is repeated on the same data, thus comprising a "two pass" filter.
– The half-power frequency (or -3 dB reduction) yields a gain of 0.707, so that the effective bandpass is 0.44 * f0. (Ruchkin, D.S. & Glaser, E.M., 1978)

11 Filtering Data (3)
To calculate the filter characteristic:
a) multiply the sampling rate by 0.44; for 60 Hz, this equals 60 Hz * 0.44 = 26.4 Hz
b) divide by the number of points (2L+1)
c) Thus, for data recorded at 60 Hz:
 a 3 point filter -> low pass of 8.8 Hz
 a 5 point filter -> low pass of 5.3 Hz
 a 7 point filter -> low pass of 3.8 Hz
 a 9 point filter -> low pass of 2.9 Hz
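The table above is just one formula evaluated for several window sizes. A minimal Python sketch of the calculation:

```python
def lowpass_cutoff(sample_rate_hz, n_points):
    """Half-power (-3 dB) cutoff of the two-pass moving-average filter:
    0.44 * sample_rate / number of points in the filter window."""
    return 0.44 * sample_rate_hz / n_points

# reproduces the slide's values for 60 Hz data
for n in (3, 5, 7, 9):
    print(n, "point filter ->", round(lowpass_cutoff(60, n), 1), "Hz")
```

Running this prints the 8.8, 5.3, 3.8, and 2.9 Hz cutoffs listed on the slide.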

12 Filtering Data (4)
To operationalize the filter (here for L=1, a 3 point filter):
1) set up 3 arrays a, b, c
2) read data into array a
3) for each point i in array a, substitute the mean of a(i-1), a(i), and a(i+1) for b(i) in array b; that is, b(i) = (a(i-1) + a(i) + a(i+1))/3 (repeat the leading and trailing values for the first and final points)
4) repeat, calculating array c values from array b
This filter results in no phase shifts. Remember that extreme high and low amplitude values will be attenuated.
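The four steps above can be sketched as follows. This is an illustrative Python version (the workshop's code is MATLAB) of the two-pass, 3-point flat filter, with end points handled by repeating the leading/trailing values as described:

```python
def smooth3(x):
    """One pass of a 3-point moving average; the first and last samples
    are repeated as padding so the output keeps the same length."""
    padded = [x[0]] + list(x) + [x[-1]]
    return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3
            for i in range(1, len(padded) - 1)]

def two_pass_filter(x):
    """Apply the 3-point filter twice (array a -> array b -> array c)."""
    return smooth3(smooth3(x))
```

Because each output point is a symmetric average around the input point, the filter introduces no phase shift, but sharp peaks are flattened, which is why precise transient timing should be measured on unfiltered data.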

13 Tonic Data
In some limited cases, only the overall average diameter is needed (is the pupil larger for different sustained tasks? is it smaller at the end of a session?). Define a specific interval, take the average of the points within it, and possibly report variation.

14 Phasic Data (data of D. Friedman et al., EEG Cl. Neuro., 1973)
– Prestimulus or Baseline: Often use the average of at least 1000 msec, minimum 200 msec, prior to the stimulus.
– Peak Dilation: Often seen msec after the critical stimulus. Use either the peak or the average of several points surrounding the peak. During recording in light, earlier peaks or increases in diameter related to inhibition of the parasympathetic system may be seen.
– Often, integrated activity over much longer durations may be desired.
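The baseline and peak-dilation measures above can be sketched directly. This is an illustrative Python sketch (names and the 1000 ms default are taken from the slide; everything else is an assumption for demonstration):

```python
def baseline_and_peak(trace, sample_rate_hz, stim_index, baseline_ms=1000):
    """Return (baseline mean, baseline-corrected peak dilation,
    peak latency in ms) for a single trial waveform."""
    n_base = max(1, int(baseline_ms * sample_rate_hz / 1000))
    start = max(0, stim_index - n_base)
    baseline = sum(trace[start:stim_index]) / (stim_index - start)
    post = trace[stim_index:]               # samples from stimulus onward
    peak = max(post)
    latency_ms = post.index(peak) * 1000 / sample_rate_hz
    return baseline, peak - baseline, latency_ms
```

Averaging several points around the detected peak (rather than taking the single maximum) is a simple extension that makes the measure less noise-sensitive, as the slide suggests.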

15 Digit Span Task Often, activity over much longer durations may need to be analyzed

16 Probability Effects, Auditory Counting Task

17 Probability Effects, Auditory Choice Reaction Task

18 Principal Components Analysis
– Data for entire waveforms are entered into the PCA, typically each average, each condition, each subject (can also be performed for individual subjects)
– Factors are based on similar variance effects across conditions and may or may not correspond to peaks or sustained activity periods
– Factors are extracted in decreasing order of variance explained
– Factor loadings reflect the influence of variables across time points
– A factor score, representing time by loading effects, is obtained for each factor for each waveform
– Factor scores may be used in repeated measures analyses
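To make loadings and factor scores concrete, here is a minimal, self-contained Python sketch that extracts only the first principal component of a set of waveforms by power iteration (real analyses would use a full eigendecomposition via a statistics package; this is purely illustrative):

```python
def first_principal_component(waves, iters=200):
    """First PCA factor of waveforms (rows = waveforms, cols = time points):
    mean-center, power-iterate on the covariance matrix to get the loadings,
    then project each waveform onto the loadings to get its factor score."""
    n, t = len(waves), len(waves[0])
    means = [sum(w[j] for w in waves) / n for j in range(t)]
    X = [[w[j] - means[j] for j in range(t)] for w in waves]
    # t x t covariance, up to a constant factor that does not change direction
    cov = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(t)]
           for a in range(t)]
    v = [1.0] * t
    for _ in range(iters):
        v = [sum(cov[a][b] * v[b] for b in range(t)) for a in range(t)]
        norm = sum(c * c for c in v) ** 0.5
        v = [c / norm for c in v]
    loadings = v
    scores = [sum(X[i][j] * loadings[j] for j in range(t)) for i in range(n)]
    return loadings, scores
```

The loadings describe how strongly each time point contributes to the factor; each waveform's factor score is a single number that can then enter a repeated measures analysis in place of the whole waveform.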

19

20

21 Steinhauer & Hakerem, 1992

22

23 Pupil: Time Frequency Analysis, Pupillary Sleepiness Test (Franzen et al., in prep; x-axis in seconds)

24 Wavelet Analysis of Oscillatory Activity
– Hypothesis: Decreased oscillatory activity in patient groups would be indicative of disruption of central parasympathetic control.
– Pupil diameter was assessed in darkness for 11 minutes as subjects fixated three small red LEDs. The pupil was digitized 60 times/sec with a resolution of 0.05 mm using an ISCAN ETL-400 system.
– Initial Data Reduction: The pupil record was first corrected for blinks (S1a), then smoothed and detrended (correcting for baseline, S1b).
– PUI can be computed as the absolute change in diameter across time, essentially a string measure of activity, but this includes all frequencies of change. However, Ludtke et al. (1998) suggested using low-frequency slow changes only. A wavelet analysis implemented in MATLAB assessed frequencies at each sampling point, expressing PUI as the sum of power in the Hz frequency range (S1c).
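The all-frequency "string measure" version of the PUI mentioned above (before Ludtke et al.'s low-frequency restriction) is simply the cumulative absolute change in diameter, usually normalized per minute. An illustrative Python sketch, with the per-minute normalization as an assumption:

```python
def pupillary_unrest_index(diam, sample_rate_hz):
    """PUI as a 'string' measure: cumulative absolute change in diameter
    per minute of recording. Includes all frequencies of change; Ludtke
    et al. (1998) instead restrict this to low-frequency change."""
    total = sum(abs(b - a) for a, b in zip(diam, diam[1:]))
    minutes = (len(diam) - 1) / sample_rate_hz / 60
    return total / minutes
```

Because every wiggle contributes, measurement noise inflates this index, which is why the slide's wavelet approach instead sums power only within a low-frequency band.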

25 Power Derived from Wavelet Analysis

26 MATLAB IMPLEMENTATIONS Thanks to Greg Siegle, Ph.D. Depts. Of Psychiatry and Psychology University of Pittsburgh

27 The environment
– Matlab is a general purpose language for mathematical and graphical operations
– All operations can be done interactively or by calling stored functions
– The basic computational unit is a row x col matrix, e.g., x=[1 2 3;4 5 6]
– Scalars (e.g., 3) and vectors (e.g., [3 4 5]) are also supported

28 Plotting
pts=0:pi/20:2.*pi; x=cos(pts); y=sin(pts);
plot(sin(pts)); plot(pts,y); plot(y); plot(x,y);
axis on/off, clf
xlabel('time'); ylabel('dilation'); title('my graph');
plot([x; y]'); plot([x; y]); legend('cos','sin');
Specialized plots, e.g., errorbar(pts,y,x); hist(pts)
figure; figure(2); subplot(2,2,1); plot(x,y);

29 Functions
Each function, accessible as a command, is stored in its own file, e.g., storing a function called parabola.m:
function y=parabola(x)
% squares its input
% usage: y=parabola(x)
y=x.^2;
Allows you to say "plot(parabola(1:10))" and "help parabola"

30 Operations in functions
if a>2, b=4; else b=5; end
for ct=1:5, b=b+1; end
while ct<5, b=b.^2; ct=ct+1; end
Scientific functions:
– Trig: sin, cos, tan, asin, acos, atan, sinh, cosh, tanh, asinh, acosh, atanh
– Rounding: floor, ceil, round, fix
– Modular: rem, mod
– Exponential: exp, log, log2, log10, sqrt
– Primes: factor, primes
– Matrix: det, inv, pinv, eig, svd, fft, and many more
– Polynomials: roots, polyfit, polyval

31 The Pupil Toolkit, Greg Siegle, Ph.D., University of Pittsburgh, School of Medicine

32 Goals
– Be able to read in and process a single subject's pupillary data in 1 step
– Have average graphs to show subjects directly after running them
– Have relevant diagnostic information produced
– Have a stock set of statistics automatically calculated on pupil waveforms when they are read in
– Make it easy to analyze new experiments

33 Standard flow for 1 subject's pupil data
– Read in the data
– Clean the data (i.e., eliminate blinks)
– Segment the data into trials (from markers or E-Prime)
– Drop bad trials
– Make condition-related averages
– Graph the condition-related averages

34 Single subject diagnostic quality control graphs available 1 minute after testing
Read the data directly, e.g., p=readiscan('1002kvid.isc');
Or run a stored procedure unique to your experiment, e.g., p=procsilkkvidexample(fname); which:
1) reads the data file via p=readiscan(fname)
2) preprocesses it via p=stublinks(p)
3) segments it into trials via p=segmentpupiltrials(p)
4) calculates statistics via p.stats=pupiltrialstats(p)

35 Raw data / after trial segmentation (Valence Identification Task)
>> p=procsilkkvidexample(1002)
found records in data
dropped 1 of 51 trials
p =
 FileName: '1002KVID.isc'
 header: [1x1 struct]
 EventTrain: [34280x1 double]
 RescaleData: [1x34280 double]
 BlinkTimes: [34280x1 logical]
 NoBlinks: [34280x1 double]
 NoBlinksUnsmoothed: [1x34280 double]
 NoBlinksDetrend: [34280x1 double]
 EventTicks: [156x1 double]
 EventCodes: [156x1 double]
 RescaleFactor: 60
 EventTimes: [156x1 double]
 AllSeconds: [34280x1 double]
 TrialStarts: [51x1 double]
 TrialEnds: [51x1 double]
 TrialLengths: [51x1 double]
 NumTrials: 51
 StimLatencies: [51x1 double]
 StimTypes: [51x1 double]
 TrialTypesNoDrops: [51x1 double]
 PupilTrials: [51x661 double]
 EventTrials: [51x661 double]
 BlinkTrials: [51x661 double]
 DetrendPupilTrials: [51x661 double]
 NormedPupTrials: [51x661 double]
 NormedDetrendPupTrials: [51x661 double]
 TrialSeconds: [1x661 double]
 Suspect: [1x51 logical]
 stats: [1x1 struct]
 drops: [1x51 logical]
 TrialTypes: [51x1 double]
 Conditions: [1 2 4]
 ConditionMeans: [3x661 double]
 ConditionSds: [3x661 double]
 condstats: [1x1 struct]

36 Digit Sorting Task (sort 3, 4, or 5 numbers): p=procvfg(5107,'numorder',2);

37 Reading data from different pupillometers
ISCAN:
– readiscan
– readiscan05text
– readiscanbehavobstext
ASL (to use, must first install the ASL Matlab drivers):
– readasl2006
– readasl2007
– readasl2008
– readasltext
– readasltextlunalab

38 How the blink elimination routine works
data = stublinks(data,graphics,lrtask,manualblinks,lowthresh)
– Smooths the data using a 3 pt flat filter applied twice
– Identifies blinks via many criteria (e.g., >0.5 mm change in 1 sample)
– Kills single good values between blinks
– Interpolates values between blinks
– Fixes blinks at the beginning and end of data collection
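The step-change criterion above (>0.5 mm change in a single sample) is easy to illustrate. This is a hedged Python sketch of blink flagging, not the stublinks implementation; the minimum-diameter threshold is an assumption added for demonstration:

```python
def flag_blinks(diam, max_step=0.5, min_diam=1.5):
    """Boolean blink mask: True where a sample jumps more than max_step mm
    from the previous sample or falls below a plausible diameter.
    Thresholds are illustrative, not the stublinks defaults."""
    blinks = [False] * len(diam)
    for i, d in enumerate(diam):
        if d < min_diam:
            blinks[i] = True            # physiologically implausible value
        elif i > 0 and abs(d - diam[i - 1]) > max_step:
            blinks[i] = True            # too-fast change in one sample
    return blinks
```

Note that the recovery sample right after a blink is also caught by the step criterion; real routines (like stublinks) then widen or merge such regions before interpolating across them.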

39 Graphs of a single subject's data – all trials
showpupiltrials(p)
plotpupiltrialmatrix(p);
threedpupilgraph(p);
rqplottrialmatrix(p)
pupilautocorrgraph(p)

40 Single subject's data – aggregate waveforms, available 1 minute after testing
– Start with plotpupilcondmeans(p)
– Or via rqplotaggpupilcondmeans(p)

41 Aggregate data – available after some work (legend: GRPPR; Control - nonpers, Control - persrel, Depressed - nonpers, Depressed - persrel)

42 Statistics available on every trial via p.stats=pupiltrialstats(p)
– Whole trial: % blinks, blinks in baseline, mean amplitude, slope
– Trial segments: mean baseline amplitude
– Relative to peak: peak, peak latency, slope post peak
– Within a window: mean amplitude, peak, peak latency, max, min, slope
– Relative to stimulus: peak amplitude post stimulus, peak latency post stimulus
– Relative to RT: peak amplitude post RT, slope post RT

43 Example – the face mask study Question – do different masks cause differential light reflexes following faces? Data in facemask\data\fmpall.mat

44

45 The masks: blank, pentagons, swishes, face, blur, shuffled, fractal

46 1000 – Aggie – grand average (mask and face intervals marked)

47 1000 – Aggie – condition means (mask and face intervals marked): p=procfacemask(1000)

48 1001 – Greg – condition means (mask and face intervals marked): p=procfacemask(1001)

49 Mean of 13 subjects. Metrics: flat & low dip, peak-trough, and a median similar to the face. Winner: swishes. fmaggmeans(pall,1)

50 Comparing waveforms
– Comparing waveforms is not trivial. We have implemented functions for computing tests at every sample along the waveform.
– Unless these comparisons are a priori, I recommend using them only in the context of a group x time or condition x time interaction done on a dimension-reduced dataset.
– Controlling Type I error:
 – Guthrie & Buchwald's (1991) technique: Guthrie D, Buchwald JS (1991): Significance testing of difference potentials. Psychophysiology 28.
 – Blair & Karniski's (1993) technique: Blair RC, Karniski W (1993): An alternative method for significance testing of waveform difference potentials. Psychophysiology 30.

51 T-tests at every sample, marking intervals long enough to care about, via Guthrie & Buchwald (1991)
(figure: Never Depressed (41) vs. Unmedicated Depressed (47), x-axis in seconds; yellow marks regions of significant differences)

52 Pupil toolkit functions to implement Guthrie & Buchwald's (1991) technique
gutautocorr
– gives the autocorrelation (acorr) of waveforms in matrix X (subjects x samples) after removing k principal components
– [acorr,kmin]=gutautocorr(X,k)
gsgutsims
– Runs simulations for Guthrie and Buchwald's (1991) technique to yield the minimum # of consecutive tests necessary for a difference to be considered significant at p<.05
– [minlen]=gsgutsims(N,T,sig,auto,numsims)
– N = # subs, T = sampling interval, sig = target waveform-wise significance (usually 0.05), auto = autocorrelation in the data, numsims = # of simulations (default = 1000)
gsgutsimsbetween(Ng1,Ng2,T,sig,ro,numsims)
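Once gsgutsims has given a minimum run length, applying it to the per-sample p-values is straightforward. An illustrative Python sketch (not toolkit code) that keeps only runs of consecutive significant samples at least min_len long:

```python
def significant_runs(pvals, alpha=0.05, min_len=5):
    """Return (start, end) index pairs (inclusive) of runs of consecutive
    samples with p < alpha that are at least min_len samples long: the
    Guthrie & Buchwald idea of trusting only long stretches of significance."""
    runs, start = [], None
    for i, p in enumerate(pvals):
        if p < alpha:
            if start is None:
                start = i                      # a run begins here
        else:
            if start is not None and i - start >= min_len:
                runs.append((start, i - 1))    # close a long-enough run
            start = None
    if start is not None and len(pvals) - start >= min_len:
        runs.append((start, len(pvals) - 1))   # run that ends at the waveform
    return runs
```

The appropriate min_len depends on the data's autocorrelation, which is exactly what gutautocorr and gsgutsims estimate.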

53 Pupil toolkit functions to implement Blair & Karniski's technique
getblairkarniskitmax
– [tmaxthresh,p05tmax] = getblairkarniskitmax(data,group,sigthresh)
– generates all permutations of data to conditions and, for each permutation, does a t-test at each time point
– We then select the tmax for which 95% (or another threshold) of the permutations are rejected, such that 95% of the permutations have NO significant t-tests
– We then apply that threshold to the successive t-tests in our waveform of interest
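The tmax idea above can be sketched compactly. This is an illustrative Python version (not the getblairkarniskitmax code) that uses random permutations rather than the full enumeration, with a Welch-style t statistic as an assumption:

```python
import random

def tmax_threshold(group1, group2, n_perm=500, pct=0.95, seed=0):
    """Permutation tmax threshold in the Blair & Karniski spirit: shuffle
    group labels, take the maximum |t| across all time points for each
    permutation, and return the pct quantile of those maxima."""
    def tstat(a, b):
        na, nb = len(a), len(b)
        ma, mb = sum(a) / na, sum(b) / nb
        va = sum((x - ma) ** 2 for x in a) / (na - 1)
        vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
        se = (va / na + vb / nb) ** 0.5
        return 0.0 if se == 0 else (ma - mb) / se

    rng = random.Random(seed)
    pooled = group1 + group2              # rows = subjects, cols = time points
    n1, t = len(group1), len(group1[0])
    maxima = []
    for _ in range(n_perm):
        rng.shuffle(pooled)               # random relabeling of subjects
        g1, g2 = pooled[:n1], pooled[n1:]
        maxima.append(max(abs(tstat([r[j] for r in g1], [r[j] for r in g2]))
                          for j in range(t)))
    maxima.sort()
    return maxima[min(len(maxima) - 1, int(pct * n_perm))]
```

Any sample in the observed waveform whose |t| exceeds this threshold is then significant with family-wise error controlled across all time points, which is the appeal of the method.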

54 Functions to compare waveforms: Within Subjects
diffwavgraph
– Contrasts 2 conditions via t-tests at each sample
– [s,h]=diffwavgraph(wavcond1,wavcond2,samprate,resamprate,alpha,outliers,patchlen,bw,pscale,wavcond3,pscalemag,xax,linewidth)
– NOTE: This is the only function in the set which is well documented... If you get this you'll get the rest...
condwavgraph
– contrasts all conditions within subjects via an ANOVA at each sample
– [s,h]=condwavgraph(condwavs,samprate,resamprate,alpha,outliers,patchlen,bw,pscale,xax)

55 diffwavgraph help file
Graphs cond1 v. cond2; expects 2 matrices, each with subjects in rows and wavs in columns, and reports the significance of the difference at each time point.
usage: s=diffwavgraph(wavcond1,wavcond2,samprate,resamprate,alpha,outliers,patchlen,bw,pscale,wavcond3,pscalemag,xax,linewidth)
wavcond1: the matrix of N rows for condition 1
wavcond2: the matrix of N rows for condition 2
samprate: the sampling rate, in Hz
resamprate: the rate at which to resample the data - usually the same as samprate
– You can leave samprate and resamprate the same and the routine will run fastest, and in the most principled way. The reason to consider resampling is to decrease the autocorrelation in the data. The more you resample, the "rougher" the data will be, and thus the fewer points you'll need in a row to get significance. So, in case you play with resampling in getting the autocorrelation, I let you throw that in as a parameter...
alpha: threshold for significance, usually set to .05 or .1
– And like Guthrie and Buchwald, I like the .1 threshold. That said, if regions I like are not coming out, I often like to see what the actual significance of patches "would be" so that I can know whether it's a power issue. Towards that end, I'll often play, in the privacy of my darkened office with the door locked, with thresholds of .3 or .5...
outliers: an easy way to recompute patches with specific people taken out, just to see if things change. By default it should be a vector with N rows and one column of all zeros. If you put ones on any row, those people are not counted in computing mean waveforms or significance tests.
patchlen: the length of consecutive data points required for significance
– note: patchlen does NOT account for resamprate. So, even if you downsample to 1 Hz, a patchlen of 17 refers to 17 points in a row in the original sampled space. Thus you must change it yourself in the calling routine...
bw: whether or not graphs should be in black and white
– For display on the screen, set bw=0 and graphs will appear in color. For publications in which color is costly, set bw=1 and it will make your graphs in black and white, with dotted lines as necessary.
pscale: This should be zero for most applications. That will set the significance bars to a uniform height. If it is not zero, the significance bars are of the height of the p-value, scaled by pscalemag.
wavcond3: an optional 3rd condition, which is plotted but not included in tests. Set it to zero if there is no third condition.
pscalemag: how large the bars for significance should appear under the x axis (if pscale = 0). So if the y axis goes from -10 to 10, you might make it 1; if the y axis goes from -.1 to .1, you might make it smaller. Making pscalemag negative puts the significance bars below the x axis.
xax: the units you want on the x-axis. By default it puts the x-axis in seconds. But if you want it in ticks, pass a vector which counts from 1:wavelen.
linewidth: how wide the lines are on the plots; 0.5 by default. If you want to thicken them up, e.g., for a poster, pass in values > 0.5.
Function by Greg Siegle, Ph.D. Cite as Siegle, G. J. (2003). The Pupil Toolkit. Available directly from the author, as used in, e.g., Siegle GJ, Steinhauer SR, Carter CS, Ramel W, Thase ME (2003): Do the seconds turn into hours? Relationships between sustained pupil dilation in response to emotional information and self-reported rumination. Cognitive Therapy and Research 27.

56 Are the conditions different? 0.57 to 6.08 seconds: F(5,8)=12.57, p=0.00. fmaggmeans(pall,2)

57 Toolboxes
– Toolboxes provide lots of functionality in a specific domain
– Toolboxes you might be interested in: Database, Statistics, Signal Processing, Wavelet

58 Database toolbox
– Works with all ODBC data sources and builds SQL queries (launched from Start)
– Click on generate-m-file under query
– Suggests making the query with the query variables as input variables

59 Stats toolbox
– Distribution fitting tool
– Can have nominal data with a nominal class via the nominal command – reduces space greatly and lets you select, e.g., by category, and make plots by category
– With curve fitting (cftool) there's a goodness-of-fit metric, and you can get quantitative measures of fit for hypothesis testing
 – Can use a custom equation by selecting new fit -> type of fit -> custom
 – Fit options = linear least squares or other as you choose
 – Can plot a confidence interval around the curve function
 – Can output fit params via save-to-workspace
– Stats -> regression -> nonlinear -> mixed effects

60 Pupil dilation as a continuous measure of cognitive load (Siegle et al., 2003, NeuroImage; panels: pupil dilation outside fMRI and during fMRI; y-axis: proportion of maximum dilation; x-axis: seconds)

61 eye-tracking colored by pupil Courtesy of Greg Siegle, PhD.

62 To play or download these presentations:
– Go to the Lab Publications link
– Click on the Biometrics Archives
– Look under SPR 2011 Workshop
– Select Presentations or References

63 (select Lab Publications)

64 Download Presentations or select Reference List

65

66

67 For additional information:




