1 An algorithm to optimise the sampling scheme within the clusters of a stepped-wedge design
Alan Girling, University of Birmingham, UK (A.J.Girling@bham.ac.uk)
Funding support (AG) from the NIHR through: the NIHR Collaborations for Leadership in Applied Health Research and Care for West Midlands (CLAHRC WM), and the HiSLAC study (NIHR ref 12/128/07).
London, November 2018

2 Scope
Cross-sectional cluster designs with (possibly) time-varying cluster-level effects and uni-directional switching between treatment regimes (as in the Stepped-Wedge design).
Equal numbers of observations in each cluster.
Freedom to choose the timing of observations within each cluster.
*Other constraints are available!*

3 Treatment Effect Estimate: $\hat\theta = \sum_{k,t} a_{kt}\,\bar y_{kt}$
SW4 design: 20 observations per cluster at each time-point; ICC = …; fixed time effects (Hussey & Hughes model). Here $\bar y_{kt}$ is the cell mean for cluster $k$ at time $t$.

Design layout: numbers of observations $m_{kt}$ (rows = clusters $k$, columns = time $t$):

            t=1    t=2    t=3    t=4    t=5   Total (M)
Cluster 1    20     20     20     20     20      100
Cluster 2    20     20     20     20     20      100
Cluster 3    20     20     20     20     20      100
Cluster 4    20     20     20     20     20      100

Coefficients $a_{kt}$ (×100):

            t=1    t=2    t=3    t=4    t=5
Cluster 1   -7.5   30.0   17.5    5.0   -7.5
Cluster 2   -2.5  -15.0   22.5   10.0   -2.5
Cluster 3    2.5  -10.0  -22.5   15.0    2.5
Cluster 4    7.5   -5.0  -17.5  -30.0    7.5

Precision = $\mathrm{var}(\hat\theta)^{-1} \propto 0.400$

4 Proposal: modify the $m_{kt}$'s to make $m^*_{kt} \propto |a_{kt}|$ within each row
In the estimate $\hat\theta = \sum_{k,t} a_{kt}\,\bar y_{kt}$, some observations have greater influence than others (unlike many classical designs).

Coefficients $a_{kt}$ (×100), with row sums of absolute values:

            t=1    t=2    t=3    t=4    t=5   Σ|a|
Cluster 1   -7.5   30.0   17.5    5.0   -7.5   67.5
Cluster 2   -2.5  -15.0   22.5   10.0   -2.5   52.5
Cluster 3    2.5  -10.0  -22.5   15.0    2.5   52.5
Cluster 4    7.5   -5.0  -17.5  -30.0    7.5   67.5

Might the layout be improved by moving observations from low-influence to high-influence cells within the same cluster? (For equal influence, $|a_{kt}|/m_{kt}$ would need to be the same in each cell.)
Proposal: modify the $m_{kt}$'s to make $m^*_{kt} \propto |a_{kt}|$ within each row:
$$m^*_{kt} = M\,\frac{|a_{kt}|}{\sum_{s=1}^{T}|a_{ks}|}$$

5 Revised Layout: $m^*_{kt} = M\,|a_{kt}| \big/ \sum_{s=1}^{T}|a_{ks}|$. Also update the treatment estimate.

No.s of observations $m^*_{kt}$:

            t=1    t=2    t=3    t=4    t=5   Total (M)
Cluster 1   11.1   44.4   25.9    7.4   11.1     100
Cluster 2    4.8   28.6   42.9   19.0    4.8     100
Cluster 3    4.8   19.0   42.9   28.6    4.8     100
Cluster 4   11.1    7.4   25.9   44.4   11.1     100

New coefficients $a^*_{kt}$ (×100), with row sums of absolute values:

            t=1    t=2    t=3    t=4    t=5   Σ|a*|
Cluster 1   -3.9   30.1   11.6    1.6   -3.9   51.1
Cluster 2   -0.7  -20.4   28.1    8.1   -0.7   58.0
Cluster 3    0.7   -8.1  -28.1   20.4    0.7   58.0
Cluster 4    3.9   -1.6  -11.6  -30.1    3.9   51.1

Precision $\propto$ 0.624: precision has improved from 0.400 to 0.624.
But $|a^*_{kt}|/m^*_{kt}$ is still not constant within each row.

6 After repeated iteration the process converges (to a 'Staircase' design)
$\hat\theta = \sum_{k,t} a^{(\infty)}_{kt}\,\bar y_{kt}$

No.s of observations $m^{(\infty)}_{kt}$:

            t=1    t=2    t=3    t=4    t=5   Total
Cluster 1     0    100      0      0      0    100
Cluster 2     0     50     50      0      0    100
Cluster 3     0      0     50     50      0    100
Cluster 4     0      0      0    100      0    100

Coefficients $a^{(\infty)}_{kt}$ (×100):

            t=1    t=2    t=3    t=4    t=5
Cluster 1     0    33.3     0      0      0
Cluster 2     0   -33.3   33.3     0      0
Cluster 3     0      0   -33.3   33.3     0
Cluster 4     0      0      0   -33.3     0

Precision $\propto$ 0.750
$|a^{(\infty)}_{kt}|/m^{(\infty)}_{kt}$ = constant within rows, at least for occupied cells.

7 The Algorithm
1. For the current allocation $(m^{(n)}_{kt})$ compute the coefficients $a^{(n)}_{kt}$ of the best estimate of $\theta$:
   $\hat\theta^{(n)} = \sum_{k,t} a^{(n)}_{kt}\,\bar y_{kt}$
2. Update the allocation to make $m^{(n+1)}_{kt} \propto |a^{(n)}_{kt}|$ within each cluster, using
   $m^{(n+1)}_{kt} = M\,|a^{(n)}_{kt}| \big/ \sum_{s=1}^{T}|a^{(n)}_{ks}|$
   ($M$ is the total number of observations in each cluster, assumed fixed.)
3. Repeat ad lib.
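A minimal Python sketch of this iteration under the Hussey & Hughes model follows. It is illustrative only: the helper names (blue_coefficients, optimise), the use of NumPy, and the variance components (sigma^2 = 1, tau^2 = 0.01, giving R = 0.5 when M = 100) are my own assumptions rather than anything fixed by the slides, and near-empty cells are handled by flooring $m_{kt}$ at a tiny value instead of deleting them.

```python
# Sketch of the reallocation algorithm for the Hussey & Hughes model (cell-mean GLS).
# Assumed/illustrative: variance components tau2, sigma2; helper names; the SW4 layout.
import numpy as np

def blue_coefficients(m, X, tau2, sigma2):
    """Coefficients a_kt of the BLUE of theta from cell means, for cell sizes m[k, t]
    and treatment indicators X[k, t], under the Hussey & Hughes covariance structure."""
    K, T = m.shape
    Z = np.zeros((K * T, T + 1))                  # regressors: T period dummies + treatment
    for k in range(K):
        for t in range(T):
            Z[k * T + t, t] = 1.0
            Z[k * T + t, T] = X[k, t]
    Omega = np.zeros((K * T, K * T))              # covariance of cell means: block-diagonal
    for k in range(K):
        Omega[k * T:(k + 1) * T, k * T:(k + 1) * T] = (
            tau2 * np.ones((T, T)) + np.diag(sigma2 / m[k]))
    W = np.linalg.inv(Omega)
    info = Z.T @ W @ Z
    coef = np.linalg.solve(info, Z.T @ W)         # (Z'WZ)^{-1} Z'W: rows are estimator weights
    a = coef[T].reshape(K, T)                     # weights a_kt of the treatment component
    var_theta = np.linalg.inv(info)[T, T]
    return a, var_theta

def optimise(m0, X, tau2, sigma2, n_iter=200, floor=1e-8):
    """Iterate: compute BLUE coefficients, then set m_kt proportional to |a_kt| per cluster."""
    m = np.maximum(m0.astype(float), floor)
    M = m.sum(axis=1, keepdims=True)              # fixed cluster totals
    for _ in range(n_iter):
        a, var_theta = blue_coefficients(m, X, tau2, sigma2)
        m = np.maximum(M * np.abs(a) / np.abs(a).sum(axis=1, keepdims=True), floor)
    return m, a, var_theta

# SW4 example: 4 clusters, 5 periods, cluster k treated from period k+1 onwards.
X = np.triu(np.ones((4, 5)), k=1)
m0 = np.full((4, 5), 20.0)                        # 20 observations per cell, M = 100
a0, v0 = blue_coefficients(m0, X, tau2=0.01, sigma2=1.0)
m_inf, a_inf, v_inf = optimise(m0, X, tau2=0.01, sigma2=1.0)
print(np.round(100 * a0, 1))                      # starting coefficients a_kt (x100)
print(np.round(m_inf, 1))                         # converged 'staircase' layout
print(round(1 / v0, 1), round(1 / v_inf, 1))      # precision before and after
```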

8 Model: $y_{kti} = \beta_t + \theta X_{kt} + \gamma_{kt} + \varepsilon_{kti}$
… for observation $i$ in cluster $k$ at time $t$, with fixed effects $\beta_t$ (time) and $\theta$ (treatment), and random effects $\gamma_{kt}$ (cluster × time) and $\varepsilon_{kti}$ (residual).
$\mathrm{var}(\gamma_{kt}) = \tau^2$, $\mathrm{var}(\varepsilon_{kti}) = \sigma^2$, $\mathrm{corr}(\gamma_{ks},\gamma_{kt}) = \Gamma_{st}$
(Hussey & Hughes) $\Gamma_{st} \equiv 1$
(Exchangeable) $\Gamma_{st} = \pi + (1-\pi)\,\delta_{st}$
(Exponential) $\Gamma_{st} = r^{|s-t|}$
$\hat\theta = \sum_{k,t} a_{kt}\,\bar y_{kt}$ is the weighted least squares estimator (≡ BLUE).
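For illustration, the small sketch below (my own helper, gamma_matrix, with arbitrary default values for $\pi$ and $r$) builds the $T \times T$ matrix $\Gamma$ for each of the three structures listed above.

```python
# Builds the cluster-by-time correlation matrix Gamma_st for the three structures above.
import numpy as np

def gamma_matrix(T, structure, pi=0.5, r=0.9):
    s, t = np.indices((T, T))
    if structure == "hussey-hughes":        # Gamma_st == 1 (a single cluster effect)
        return np.ones((T, T))
    if structure == "exchangeable":         # Gamma_st = pi + (1 - pi) * delta_st
        return np.where(s == t, 1.0, pi)
    if structure == "exponential":          # Gamma_st = r ** |s - t|
        return r ** np.abs(s - t)
    raise ValueError(f"unknown structure: {structure}")

print(gamma_matrix(5, "exponential", r=0.9).round(2))
```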

9 Properties of the Algorithm
Improvement happens at every step: $\mathrm{var}(\hat\theta^{(n+1)}) \le \mathrm{var}(\hat\theta^{(n)})$, with equality only if $m^{(n)}_{kt} \propto |a^{(n)}_{kt}|$ within each cluster.
Convergence to a stable point is guaranteed, and this is usually the optimal allocation:
- Any stable point is a 'best' allocation among all allocations with that support (i.e. collection of non-zero cells).
- But if an empty cell appears at any step, i.e. $a^{(n)}_{kt} = 0$, that cell remains empty at every subsequent step. (In principle the best allocation could be missed.)
- On the other hand, this property allows us to obtain improved/optimal designs in situations where sampling in some cells is prohibited.
Behaviour depends on $\sigma^2$, $\tau^2$ and $M$ only through $R = \dfrac{M\tau^2}{M\tau^2 + \sigma^2}$.
$R$ is related to the Cluster-Mean Correlation (CMC): $\dfrac{CMC}{1-CMC} = \dfrac{\mathbf{1}'\Gamma\mathbf{1}}{T^2}\cdot\dfrac{R}{1-R}$.
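The two quantities above are straightforward to compute; in the sketch below the values of $M$, $\tau^2$ and $\sigma^2$ are arbitrary illustrative choices.

```python
# R = M*tau2 / (M*tau2 + sigma2), and CMC/(1-CMC) = (1' Gamma 1 / T^2) * R/(1-R).
import numpy as np

def R_value(M, tau2, sigma2):
    return M * tau2 / (M * tau2 + sigma2)

def cluster_mean_correlation(M, tau2, sigma2, Gamma):
    T = Gamma.shape[0]
    R = R_value(M, tau2, sigma2)
    odds = (Gamma.sum() / T**2) * R / (1 - R)     # the odds CMC / (1 - CMC)
    return odds / (1 + odds)

Gamma_hh = np.ones((5, 5))                        # Hussey & Hughes: Gamma_st == 1
print(R_value(100, 0.01, 1.0))                    # 0.5 with these illustrative values
print(cluster_mean_correlation(100, 0.01, 1.0, Gamma_hh))   # equals R when Gamma == 1
```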

10 Examples: 1) Hussey & Hughes model
Initial allocation: equal % of observations at each time-point (row totals = 100%).
$R = 0.25$: when $R < 1/2$ the solution is NOT unique.
$R = 0.75$: the solution is unique when $R \ge 1/2$, apart from trades between end columns.
[Tables: optimised allocations (%) for R = 0.25 and R = 0.75]
Efficiency improves from … (initially) to … in each case.
(Efficiency computed relative to a Cluster Cross-Over design with the same number of observations.)

11 Efficiency of Optimised Allocation: Hussey & Hughes Model

12 Examples: 2) Exponential model: r = 0.9
Initial allocation: equal % of observations at each time-point (row totals = 100%).
Exact general behaviour unknown.
[Tables: optimised allocations (%) for R = 0.25 and R = 0.75]
Efficiency improves from … (initially) to … in each case.
(Efficiency relative to an "Ideal" Cluster Cross-Over design with the same number of observations.)

13 Efficiency of Optimised Allocation: Exponential Model with r = 0.9

14 Example with prohibited cells: 'Transition'/'Washout' periods (under the H&H model)
Initial allocation: equal % of observations at each permissible time-point (row totals = 100%).
[Tables: optimised allocations (%) for R = 0.25 and R = 0.75]
Efficiency improves from … (initially) to … in each case.

15 Why it works
For any linear estimate $\hat\theta = \sum_{k,t} a_{kt}\,\bar y_{kt}$,
$$\mathrm{var}(\hat\theta) = V(a,m) = \tau^2 \sum_{k=1}^{K} a_k'\,\Gamma\, a_k + \sigma^2 \sum_{k=1}^{K}\sum_{t=1}^{T} \frac{a_{kt}^2}{m_{kt}}.$$
It is always true that
$$\sum_{t=1}^{T} \frac{a_{kt}^2}{m^*_{kt}} \le \sum_{t=1}^{T} \frac{a_{kt}^2}{m_{kt}}, \qquad\text{where } m^*_{kt} = M\,\frac{|a_{kt}|}{\sum_{s=1}^{T}|a_{ks}|};$$
the left-hand side equals $\big(\sum_t |a_{kt}|\big)^2/M$, and the inequality is the Cauchy–Schwarz inequality applied within cluster $k$.
So the variance of the estimate is reduced by the reallocation of observations.
Now apply this argument to the BLUE of $\theta$ under allocation $(m_{kt})$: under the allocation $(m^*_{kt})$ that estimator has no larger variance, and the BLUE under $(m^*_{kt})$ can only do better. It follows that the BLUE of $\theta$ under allocation $(m^*_{kt})$ has smaller variance than the BLUE under $(m_{kt})$.
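A quick numerical check of the within-cluster inequality, with arbitrary illustrative coefficients and cell sizes:

```python
# Check: reallocating to m*_kt = M|a_kt|/sum_s|a_ks| never increases sum_t a_kt^2/m_kt.
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=7)                  # coefficients a_kt for one cluster (illustrative)
m = rng.uniform(5.0, 40.0, size=7)      # an arbitrary starting allocation
M = m.sum()
m_star = M * np.abs(a) / np.abs(a).sum()

lhs = np.sum(a**2 / m_star)             # equals (sum_t |a_kt|)^2 / M
rhs = np.sum(a**2 / m)
print(lhs <= rhs + 1e-12, round(lhs, 4), round(rhs, 4))
```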

16 Spin-off: An Objective Function
The best allocation corresponds to a stable point of the algorithm. At any stable point (i.e. where $m_{kt} \propto |a_{kt}|$ within each cluster):
$$V(a,m) \propto \Psi(a) = R\sum_{k=1}^{K} a_k'\,\Gamma\, a_k + (1-R)\sum_{k=1}^{K}\Big(\sum_{t=1}^{T}|a_{kt}|\Big)^2.$$
Any optimal design corresponds to a (constrained) minimum value of $\Psi$:
$$\Psi(\hat a) = \min_a \Psi(a)$$
(subject to unbiasedness constraints on the $a_{kt}$'s, and $a_{kt} = 0$ in any prohibited cells), with cell numbers given by
$$m_{kt} = M\,\frac{|\hat a_{kt}|}{\sum_{s=1}^{T}|\hat a_{ks}|}.$$
$\Psi(a)$ is not a smooth function, but it is convex.
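The objective is easy to evaluate once a candidate coefficient matrix is in hand. The sketch below (helper names are mine) computes $\Psi(a)$ for a $K \times T$ matrix of coefficients and recovers the stable-point cell numbers; it is applied to the staircase coefficients reconstructed in the SW4 example above, with $\Gamma \equiv 1$ and $R = 0.5$ as illustrative inputs.

```python
# Evaluate Psi(a) = R * sum_k a_k' Gamma a_k + (1-R) * sum_k (sum_t |a_kt|)^2,
# and recover the stable-point cell numbers m_kt = M |a_kt| / sum_s |a_ks|.
import numpy as np

def psi(a, Gamma, R):
    cluster_quad = np.einsum('ks,st,kt->', a, Gamma, a)    # sum_k a_k' Gamma a_k
    l1_sq = (np.abs(a).sum(axis=1) ** 2).sum()             # sum_k (sum_t |a_kt|)^2
    return R * cluster_quad + (1 - R) * l1_sq

def cells_from_coefficients(a, M):
    return M * np.abs(a) / np.abs(a).sum(axis=1, keepdims=True)

# Staircase coefficients from the SW4 example (as reconstructed above), Gamma == 1:
a_stair = np.array([[0.0,  1/3,  0.0,  0.0, 0.0],
                    [0.0, -1/3,  1/3,  0.0, 0.0],
                    [0.0,  0.0, -1/3,  1/3, 0.0],
                    [0.0,  0.0,  0.0, -1/3, 0.0]])
print(psi(a_stair, np.ones((5, 5)), R=0.5))          # objective value at this design
print(cells_from_coefficients(a_stair, M=100))       # recovers the staircase layout
```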

17 Potential for exact results using $\Psi(a)$
E.g. for the Hussey & Hughes model ($\Gamma_{st} \equiv 1$):
$$\Psi(a) = R\sum_{k=1}^{K}\Big(\sum_{t=1}^{T} a_{kt}\Big)^2 + (1-R)\sum_{k=1}^{K}\Big(\sum_{t=1}^{T}|a_{kt}|\Big)^2.$$

18 (Exact) Optimal Design under HH: $R \ge \tfrac12$
The matrix of $a_{kt}$'s has an 'Anchored Staircase' form (shown here for K = 6 clusters and T = 7 periods; blank cells are zero):

            t=1     t=2    t=3    t=4    t=5    t=6    t=7
Cluster 1  -q0/2    q1
Cluster 2          -q1     q2
Cluster 3                 -q2     q3
Cluster 4                        -q3     q4
Cluster 5                               -q4     q5
Cluster 6                                      -q5   +q6/2

$$q_k \propto \coth\tfrac{\phi}{2}\,\sinh\tfrac{K\phi}{2} - \cosh\Big(\big(k-\tfrac{K}{2}\big)\phi\Big), \qquad \cosh\phi = (2R-1)^{-1}.$$
E.g. $R = 0.75$: Efficiency $\approx 0.76$ (relative to CXO).
$$\text{Efficiency} = 1 - \frac{1}{K}\Big(2 - \tanh\tfrac{\phi}{2}\,\tanh\tfrac{K\phi}{2}\Big) = 1 - \frac{1}{K}\Big(2 - \sqrt{\tfrac{1-R}{R}}\,\Big) + O\Big(\tfrac{1}{K^2}\Big).$$
[Table: optimised allocation (%) for R = 0.75]
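The quoted closed form is easy to evaluate numerically. In the sketch below, $K = 6$ is my assumption (chosen because it is consistent with the quoted efficiency of about 0.76 at $R = 0.75$).

```python
# Efficiency (relative to a cluster cross-over) from the closed form on this slide.
import numpy as np

def hh_efficiency_high_R(R, K):
    phi = np.arccosh(1.0 / (2.0 * R - 1.0))     # cosh(phi) = (2R - 1)^{-1}, valid for R > 1/2
    return 1.0 - (2.0 - np.tanh(phi / 2.0) * np.tanh(K * phi / 2.0)) / K

def hh_efficiency_high_R_approx(R, K):
    return 1.0 - (2.0 - np.sqrt((1.0 - R) / R)) / K    # large-K approximation

print(round(hh_efficiency_high_R(0.75, 6), 3),
      round(hh_efficiency_high_R_approx(0.75, 6), 3))  # both close to the quoted ~0.76
```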

19 Optimal Design under HH: $R < \tfrac12$
One possible matrix of $a_{kt}$'s is built from the entries $x+y$, $y$, $-x$, $x$, $-y$ and $-x-y$, where
$$x = (K-2R)^{-1}, \qquad y = \big(\tfrac12 - R\big)\,(K-2R)^{-1}.$$
E.g. $R = 0.25$: Efficiency $\approx 0.92$.
$$\text{Efficiency} = 1 - \frac{2R}{K}.$$
[Table: one optimised allocation (%) for R = 0.25]
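The same kind of check for the $R < \tfrac12$ formulas, again assuming $K = 6$ for the worked value:

```python
# Coefficient levels and efficiency from the closed forms quoted for R < 1/2.
def hh_low_R_design(R, K):
    x = 1.0 / (K - 2.0 * R)
    y = (0.5 - R) / (K - 2.0 * R)
    efficiency = 1.0 - 2.0 * R / K
    return x, y, efficiency

print(hh_low_R_design(0.25, 6))   # efficiency ~0.92 at R = 0.25, K = 6
```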

20 An alternative solution was given earlier:
$R = 0.25$; Efficiency = 0.92
[Table: the alternative optimised allocation (%) for R = 0.25, shown on slide 10]
…and there are many others.

21 Summary
Flexible approach to improving design, often leading to substantial improvements in precision.
Works for sparse layouts and designs with prohibited cells.
Where the solution is a staircase-type design the experiment may take longer:
- partly this is a consequence of improved precision;
- a fair comparison is between designs with the same precision (i.e. SW vs an optimised design with fewer total observations).
The objective function $\Psi$ provides an alternative approach via convex optimisation methods, and a tool for finding exact results.

22 Further developments
Optimal allocation of clusters to (optimised) sequences:
- readily accomplished by adding an extra computation to the algorithm;
- little advantage for precision, it seems, but there may be scope for alternative near-optimal designs.
Alternative constraints:
- fixed total size of study;
- constraints over specific time-periods;
- unequal clusters.
Explore optimal designs with prohibited cells:
- e.g. the Washout example;
- or to seek more compact designs.

