Design features and parameter recoverability in HGF
Dear Chris & Team,
I am designing a new paradigm and I would like to test how HGF parameters account for differences in response outcomes across experimental conditions, and potentially to look into individual differences down the track. My goal is to ensure that the paradigm I am designing, and the learning sequence I am presenting, yield recoverable parameters.
I noticed striking differences across the literature in the number of trials used, ranging from 50 (Siegel et al., 2018, Nat Hum Behav) through 200 (Diaconescu et al., 2014, PLoS Comput Biol) to the 300 and 600 in your introductory HGF papers.
I am wondering if you have any broad recommendations for achieving greater design efficiency, i.e. more precise parameter recovery with fewer trials. Specifically:
- For instance, what design features allowed Siegel et al. (2018) to reliably capture effects on $\omega_2$ (effect sizes ranging from r = .35 to .62) with only 50 trials?
- Do some models (e.g. binary outcome models) require more trials than others (e.g. continuous outcome models)?
- Does the number of HGF levels also matter (e.g. are models with fewer parameters more easily recoverable)?
- Do some features of the input sequence (e.g. the outcome probability schedule for binary outcomes) drive efficiency? (See the sketch after this list for the kind of schedule I mean.)
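To make that last question concrete, here is the kind of blocked probability schedule I have been trying; the block lengths and probabilities are illustrative examples, not values I'm committed to:

```matlab
% A possible blocked schedule for 200 binary trials: a stable block
% (p = .9), then volatile blocks (p alternating between .8 and .2),
% then stable again. All values here are illustrative only.
rng(1);                                       % reproducibility
p = [repmat(.9, 1, 50), ...                   % stable block
     repmat([repmat(.8, 1, 25), repmat(.2, 1, 25)], 1, 2), ...  % volatile
     repmat(.9, 1, 50)];                      % stable block
u = double(rand(1, numel(p)) < p)';           % sampled outcomes (column vector)
```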
I've been running parameter-recovery simulations with the tapas_hgf_binary and tapas_unitsq_sgm models for a while now (generating data and then refitting it using the default priors), and I am struggling to identify feasible binary outcome sequences (up to 200 trials) that give me sufficiently precise recovery to detect the kinds of effects reported in Siegel et al. (2018). A sketch of my recovery loop is below. Any advice on how to continue my search would be very helpful! Thanks in advance!
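For reference, this is a minimal sketch of the recovery loop I'm running, following the tapas_simModel / tapas_fitModel interface from the HGF demo; the generative $\omega_2$ range, the number of simulations, and the placeholder input sequence are assumptions for illustration rather than my exact settings:

```matlab
% Parameter-recovery loop for omega_2 in the 3-level binary HGF,
% using the TAPAS toolbox with default priors.
nSim    = 100;
u       = double(rand(200, 1) < 0.8);      % placeholder input sequence
rng(0);
om2_gen = -6 + 4*rand(nSim, 1);            % generative omega_2 in [-6, -2] (assumed range)
om2_rec = NaN(nSim, 1);

for i = 1:nSim
    rng(i);                                % reproducible simulations
    % Perceptual parameter vector as in the HGF demo, with omega_2 varied:
    % mu_0, sa_0, rho, ka, om (om(2) = omega_2, om(3) = omega_3)
    prc_pvec = [NaN 0 1 NaN 1 1 NaN 0 0 1 1 NaN om2_gen(i) -6];
    try
        % Simulate responses: binary HGF + unit-square sigmoid (zeta = 5)
        sim = tapas_simModel(u, 'tapas_hgf_binary', prc_pvec, ...
                             'tapas_unitsq_sgm', 5);
        % Refit with the default configs
        est = tapas_fitModel(sim.y, u, ...
                             'tapas_hgf_binary_config', ...
                             'tapas_unitsq_sgm_config', ...
                             'tapas_quasinewton_optim_config');
        om2_rec(i) = est.p_prc.om(2);
    catch
        % unstable trajectories or failed fits are left as NaN
    end
end

% Recovery precision: correlation between generative and recovered omega_2
ok = ~isnan(om2_rec);
R  = corrcoef(om2_gen(ok), om2_rec(ok));
fprintf('omega_2 recovery: r = %.2f (%d/%d fits usable)\n', R(1,2), sum(ok), nSim);
```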
Kind regards, Milan
--
Milan Andrejevic
Cognition and Philosophy Lab
Monash University