This page explains the details of estimating weights from SuperLearner-based propensity scores by setting method = "super" in the call to weightit() or weightitMSM(). This method can be used with binary, multinomial, and continuous treatments.

In general, this method relies on estimating propensity scores using the SuperLearner algorithm for stacking predictions and then converting those propensity scores into weights using a formula that depends on the desired estimand. For binary and multinomial treatments, one or more binary classification algorithms are used to estimate the propensity scores as the predicted probability of being in each treatment given the covariates. For continuous treatments, regression algorithms are used to estimate generalized propensity scores as the conditional density of treatment given the covariates. This method relies on SuperLearner::SuperLearner() from the SuperLearner package.
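As a minimal base-R sketch of the second step (not WeightIt's internal code, which uses get_w_from_ps()), the conversion from binary-treatment propensity scores to weights looks like this; `ps` and `treat` are hypothetical values:

```r
# Hypothetical estimated propensity scores P(A = 1 | X) and treatment vector
ps    <- c(0.2, 0.6, 0.4, 0.8)
treat <- c(0,   1,   0,   1)

# ATE weights: inverse probability of the treatment actually received
w_ate <- treat / ps + (1 - treat) / (1 - ps)

# ATT weights: treated units get weight 1; controls get ps / (1 - ps)
w_att <- treat + (1 - treat) * ps / (1 - ps)
```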

Binary Treatments

For binary treatments, this method estimates the propensity scores using SuperLearner::SuperLearner(). The following estimands are allowed: ATE, ATT, ATC, ATO, ATM, and ATOS. Weights can also be computed using marginal mean weighting through stratification for the ATE, ATT, and ATC. See get_w_from_ps() for details.

Multinomial Treatments

For multinomial treatments, the propensity scores are estimated using several calls to SuperLearner::SuperLearner(), one for each treatment group; the treatment probabilities are not normalized to sum to 1. The following estimands are allowed: ATE, ATT, ATC, ATO, and ATM. The weights for each estimand are computed using the standard formulas. Weights can also be computed using marginal mean weighting through stratification for the ATE, ATT, and ATC. See get_w_from_ps() for details.
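To illustrate how the per-group predicted probabilities become ATE weights for a multinomial treatment, here is a small base-R sketch; the matrix `p` stands in for the (unnormalized) predictions from the separate SuperLearner fits:

```r
# Rows are units, columns are treatment levels; rows need not sum to 1
p <- rbind(c(0.5, 0.3, 0.3),
           c(0.2, 0.6, 0.1))
A <- c(1, 2)  # treatment level received by each unit (column index)

# ATE weights: inverse of the predicted probability of the received treatment
w <- 1 / p[cbind(seq_len(nrow(p)), A)]
```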

Continuous Treatments

For continuous treatments, the generalized propensity score is estimated using SuperLearner::SuperLearner(). In addition, kernel density estimation can be used instead of assuming a normal density for the numerator and denominator of the generalized propensity score by setting use.kernel = TRUE. Other arguments to density() can be specified to refine the density estimation parameters. plot = TRUE can be specified to plot the density for the numerator and denominator, which can be helpful in diagnosing extreme weights.
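When use.kernel = FALSE, a density must be assumed; as a rough base-R sketch of the default normal-density case, the stabilized weight is the ratio of the numerator and denominator densities evaluated at the standardized residuals (`r_num` and `r_den` are hypothetical values, not output from an actual model fit):

```r
# Hypothetical standardized residuals from the marginal (numerator) and
# conditional (denominator) treatment models
r_num <- c(0.1, -1.2, 0.5)
r_den <- c(0.3, -0.8, 1.5)

# Stabilized generalized propensity score weight under a normal density
w <- dnorm(r_num) / dnorm(r_den)
```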

Longitudinal Treatments

For longitudinal treatments, the weights are the product of the weights estimated at each time point.
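In code, that product is taken elementwise across time points; a toy base-R illustration with hypothetical per-time-point weights:

```r
# Hypothetical weights for three units at two time points
w_t1 <- c(1.2, 0.8, 2.0)
w_t2 <- c(1.5, 1.0, 0.5)

# Longitudinal weight for each unit: product of its weights across time points
w <- w_t1 * w_t2
```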

Sampling Weights

Sampling weights are supported through s.weights in all scenarios.

Missing Data

In the presence of missing data, the following value(s) for missing are allowed:

"ind" (default)

First, for each variable with missingness, a new missingness indicator variable is created which takes the value 1 if the original covariate is NA and 0 otherwise. The missingness indicators are added to the model formula as main effects. The missing values in the covariates are then replaced with 0s. The weight estimation then proceeds with this new formula and set of covariates. The covariates output in the resulting weightit object will be the original covariates with the NAs.
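The transformation can be sketched in base R on a toy covariate data frame (the `.na` indicator names here are illustrative, not WeightIt's internal naming):

```r
X <- data.frame(age = c(25, NA, 40), educ = c(12, 16, NA))

for (v in names(X)[colSums(is.na(X)) > 0]) {
  X[[paste0(v, ".na")]] <- as.integer(is.na(X[[v]]))  # missingness indicator
  X[[v]][is.na(X[[v]])] <- 0                          # replace NAs with 0
}
```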

Additional Arguments

discrete

If TRUE, uses discrete SuperLearner, which simply selects the best-performing method rather than a combination. Default is FALSE, which finds the optimal combination of predictions from the supplied libraries using the method specified in SL.method.

An argument to SL.library must be supplied. To see a list of available entries, use SuperLearner::listWrappers().

All arguments to SuperLearner::SuperLearner() can be passed through weightit() or weightitMSM(), with the following exceptions:

  • obsWeights is ignored because sampling weights are passed using s.weights.

  • method in SuperLearner() is replaced with the argument SL.method in weightit().

For continuous treatments only, the following arguments may be supplied:

density

A function corresponding to the conditional density of the treatment. The standardized residuals of the treatment model will be fed through this function to produce the numerator and denominator of the generalized propensity score weights. If unspecified, dnorm() is used as recommended by Robins et al. (2000). This can also be supplied as a string containing the name of the function to be called. If the string contains underscores, it will be split at the underscores, with the first piece taken as the function name and the remaining pieces supplied as the function's second and subsequent arguments. For example, if density = "dt_2" is specified, the density used will be that of a t-distribution with 2 degrees of freedom. Using a t-distribution can be useful when extreme outcome values are observed (Naimi et al., 2014). Ignored if use.kernel = TRUE (described below).
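A sketch of how such a string could be parsed (an illustration of the naming convention only, not WeightIt's actual parser):

```r
spec <- strsplit("dt_2", "_")[[1]]  # c("dt", "2")
fun  <- match.fun(spec[1])          # the dt() function
args <- as.numeric(spec[-1])        # remaining pieces: df = 2

# Density function that passes the parsed pieces as further arguments
dens <- function(x) do.call(fun, c(list(x), as.list(args)))
```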

use.kernel

If TRUE, uses kernel density estimation through the density() function to estimate the numerator and denominator densities for the weights. If FALSE, the argument to the density parameter is used instead.

bw, adjust, kernel, n

If use.kernel = TRUE, the arguments to the density() function. The defaults are the same as those in density except that n is 10 times the number of units in the sample.

plot

If use.kernel = TRUE, whether to plot the estimated density.

Balance SuperLearner

In addition to the methods allowed by SuperLearner(), one can specify SL.method = "method.balance" to use "Balance SuperLearner" as described by Pirracchio and Carone (2018), wherein covariate balance is used to choose the optimal combination of the predictions from the methods specified with SL.library. Coefficients are chosen (one for each prediction method) so that the weights generated from the weighted combination of the predictions optimize a balance criterion, which must be set with the stop.method argument, described below.

stop.method

A string describing the balance criterion used to select the best weights. See stop.method for allowable options for each treatment type. For binary and multinomial treatments, the default is "es.mean", which minimizes the average absolute standardized mean difference among the covariates between treatment groups. For continuous treatments, the default is "p.mean", which minimizes the average absolute Pearson correlation between the treatment and covariates.

Note that this implementation differs from that of Pirracchio and Carone (2018) in that here, balance is measured only on the terms included in the model formula (i.e., not their interactions unless specifically included), and balance is assessed on a sample weighted using the estimated predicted values as propensity scores, rather than on a sample matched on the predicted values using propensity score matching. Binary and continuous treatments are supported, but multinomial treatments currently are not.

Additional Outputs

info

For binary and continuous treatments, a list with two entries, coef and cvRisk. For multinomial treatments, a list of lists with these two entries, one for each treatment level.

coef

The coefficients in the linear combination of the predictions from each method in SL.library. Higher values indicate that the corresponding method plays a larger role in determining the resulting predicted value, and values close to zero indicate that the method plays little role in determining the predicted value. When discrete = TRUE, these correspond to the coefficients that would have been estimated had discrete been FALSE.

cvRisk

The cross-validation risk for each method in SL.library. Higher values indicate that the method has worse cross-validation accuracy. When SL.method = "method.balance", this is instead the sample-weighted balance statistic requested with stop.method, where higher values indicate worse balance.

obj

When include.obj = TRUE, the SuperLearner fit(s) used to generate the predicted values. For binary and continuous treatments, the output of the call to SuperLearner::SuperLearner(). For multinomial treatments, a list of the outputs of the calls to SuperLearner::SuperLearner().

Details

SuperLearner works by fitting several machine learning models to the treatment and covariates and then taking a weighted combination of the generated predicted values to use as the propensity scores, which are then used to construct weights. The machine learning models used are supplied using the SL.library argument; the more models are supplied, the higher the chance of correctly modeling the propensity score. The predicted values are combined using the method supplied in the SL.method argument (which is nonnegative least squares by default). A benefit of SuperLearner is that, asymptotically, it is guaranteed to perform as well as or better than the best-performing method included in the library. Using Balance SuperLearner by setting SL.method = "method.balance" works by selecting the combination of predicted values that minimizes an imbalance measure.
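A toy illustration of the stacking step: hypothetical predictions from three learners are combined with nonnegative coefficients (normalized to sum to 1, as with the default nonnegative least squares method) to form the final propensity scores. The learner names and coefficient values are made up for illustration:

```r
# Columns: predicted P(A = 1 | X) from three hypothetical learners
preds <- cbind(glm = c(0.20, 0.70),
               rf  = c(0.30, 0.60),
               gam = c(0.25, 0.80))

coefs <- c(0.5, 0.3, 0.2)  # hypothetical stacking coefficients (sum to 1)

ps <- drop(preds %*% coefs)  # final ensemble propensity scores
```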

References

Binary treatments

Pirracchio, R., Petersen, M. L., & van der Laan, M. (2015). Improving Propensity Score Estimators’ Robustness to Model Misspecification Using Super Learner. American Journal of Epidemiology, 181(2), 108–119. doi:10.1093/aje/kwu253

Continuous treatments

Kreif, N., Grieve, R., Díaz, I., & Harrison, D. (2015). Evaluation of the Effect of a Continuous Treatment: A Machine Learning Approach with an Application to Treatment for Traumatic Brain Injury. Health Economics, 24(9), 1213–1228. doi:10.1002/hec.3189

Balance SuperLearner (SL.method = "method.balance")

Pirracchio, R., & Carone, M. (2018). The Balance Super Learner: A robust adaptation of the Super Learner to improve estimation of the average treatment effect in the treated based on propensity score matching. Statistical Methods in Medical Research, 27(8), 2504–2518. doi:10.1177/0962280216682055

See method_ps for additional references.

See also

weightit(), weightitMSM(), get_w_from_ps()

stop.method for allowable arguments to stop.method when using SL.method = "method.balance"

Note

Some methods formerly available in SuperLearner are now in SuperLearnerExtra, which can be found on GitHub at https://github.com/ecpolley/SuperLearnerExtra.

Examples

library("cobalt")
data("lalonde", package = "cobalt")

#Balancing covariates between treatment groups (binary)
(W1 <- weightit(treat ~ age + educ + married +
                  nodegree + re74, data = lalonde,
                method = "super", estimand = "ATT",
                SL.library = c("SL.glm", "SL.stepAIC",
                               "SL.glm.interaction")))
#> Loading required package: nnls
#> A weightit object
#>  - method: "super" (propensity score weighting with SuperLearner)
#>  - number of obs.: 614
#>  - sampling weights: none
#>  - treatment: 2-category
#>  - estimand: ATT (focal: 1)
#>  - covariates: age, educ, married, nodegree, re74
summary(W1)
#>                  Summary of weights
#> 
#> - Weight ranges:
#> 
#>            Min                                  Max
#> treated 1.0000      ||                       1.0000
#> control 0.0062 |---------------------------| 5.6573
#> 
#> - Units with 5 most extreme weights by group:
#>                                            
#>               7      6      5      4      2
#>  treated      1      1      1      1      1
#>             411    589    269    409    296
#>  control 2.4116 2.5004 2.7322 3.3402 5.6573
#> 
#> - Weight statistics:
#> 
#>         Coef of Var   MAD Entropy # Zeros
#> treated       0.000 0.000  -0.000       0
#> control       1.116 0.738   0.442       0
#> 
#> - Effective Sample Sizes:
#> 
#>            Control Treated
#> Unweighted  429.       185
#> Weighted    191.27     185
bal.tab(W1)
#> Call
#>  weightit(formula = treat ~ age + educ + married + nodegree + 
#>     re74, data = lalonde, method = "super", estimand = "ATT", 
#>     SL.library = c("SL.glm", "SL.stepAIC", "SL.glm.interaction"))
#> 
#> Balance Measures
#>                Type Diff.Adj
#> prop.score Distance   0.0599
#> age         Contin.  -0.1408
#> educ        Contin.   0.0167
#> married      Binary  -0.0049
#> nodegree     Binary   0.0123
#> re74        Contin.  -0.0251
#> 
#> Effective sample sizes
#>            Control Treated
#> Unadjusted  429.       185
#> Adjusted    191.27     185
# \donttest{
#Balancing covariates with respect to race (multinomial)
(W2 <- weightit(race ~ age + educ + married +
                  nodegree + re74, data = lalonde,
                method = "super", estimand = "ATE",
                SL.library = c("SL.glm", "SL.stepAIC",
                               "SL.glm.interaction")))
#> A weightit object
#>  - method: "super" (propensity score weighting with SuperLearner)
#>  - number of obs.: 614
#>  - sampling weights: none
#>  - treatment: 3-category (black, hispan, white)
#>  - estimand: ATE
#>  - covariates: age, educ, married, nodegree, re74
summary(W2)
#>                  Summary of weights
#> 
#> - Weight ranges:
#> 
#>           Min                                   Max
#> black  1.3125 |-------------------|         14.0815
#> hispan 1.7509  |--------------------------| 19.0788
#> white  1.0815 |----|                         4.9344
#> 
#> - Units with 5 most extreme weights by group:
#>                                                
#>             184     244     485     182     181
#>   black  7.5917  7.7461 10.1133 11.6337 14.0815
#>             346     392     371     269     345
#>  hispan 16.6957 17.1862 17.2636 17.8517 19.0788
#>             457      23     409     589     296
#>   white  4.0945  4.1846  4.3505  4.6072  4.9344
#> 
#> - Weight statistics:
#> 
#>        Coef of Var   MAD Entropy # Zeros
#> black        0.640 0.391   0.138       0
#> hispan       0.476 0.372   0.110       0
#> white        0.388 0.317   0.069       0
#> 
#> - Effective Sample Sizes:
#> 
#>             black hispan  white
#> Unweighted 243.    72.   299.  
#> Weighted   172.57  58.85 260.06
bal.tab(W2)
#> Call
#>  weightit(formula = race ~ age + educ + married + nodegree + re74, 
#>     data = lalonde, method = "super", estimand = "ATE", SL.library = c("SL.glm", 
#>         "SL.stepAIC", "SL.glm.interaction"))
#> 
#> Balance summary across all treatment pairs
#>             Type Max.Diff.Adj
#> age      Contin.       0.0968
#> educ     Contin.       0.0794
#> married   Binary       0.0450
#> nodegree  Binary       0.0246
#> re74     Contin.       0.0328
#> 
#> Effective sample sizes
#>             black hispan  white
#> Unadjusted 243.    72.   299.  
#> Adjusted   172.57  58.85 260.06

#Balancing covariates with respect to re75 (continuous)
#assuming t(8) conditional density for treatment
(W3 <- weightit(re75 ~ age + educ + married +
                  nodegree + re74, data = lalonde,
                method = "super", density = "dt_8",
                SL.library = c("SL.glm", "SL.ridge",
                               "SL.glm.interaction")))
#> A weightit object
#>  - method: "super" (propensity score weighting with SuperLearner)
#>  - number of obs.: 614
#>  - sampling weights: none
#>  - treatment: continuous
#>  - covariates: age, educ, married, nodegree, re74
summary(W3)
#>                  Summary of weights
#> 
#> - Weight ranges:
#> 
#>        Min                                   Max
#> all 0.0439 |---------------------------| 21.0528
#> 
#> - Units with 5 most extreme weights by group:
#>                                           
#>         431    483     484     485     354
#>  all 9.4162 16.527 18.1028 19.0692 21.0528
#> 
#> - Weight statistics:
#> 
#>     Coef of Var   MAD Entropy # Zeros
#> all       1.334 0.504   0.341       0
#> 
#> - Effective Sample Sizes:
#> 
#>             Total
#> Unweighted 614.  
#> Weighted   221.04
bal.tab(W3)
#> Call
#>  weightit(formula = re75 ~ age + educ + married + nodegree + re74, 
#>     data = lalonde, method = "super", density = "dt_8", SL.library = c("SL.glm", 
#>         "SL.ridge", "SL.glm.interaction"))
#> 
#> Balance Measures
#>             Type Corr.Adj
#> age      Contin.   0.0311
#> educ     Contin.   0.0348
#> married   Binary   0.0613
#> nodegree  Binary  -0.0599
#> re74     Contin.   0.0405
#> 
#> Effective sample sizes
#>             Total
#> Unadjusted 614.  
#> Adjusted   221.04
# }
#Balancing covariates between treatment groups (binary)
# using balance SuperLearner to minimize the average
# KS statistic
(W4 <- weightit(treat ~ age + educ + married +
                  nodegree + re74, data = lalonde,
                method = "super", estimand = "ATT",
                SL.library = c("SL.glm", "SL.stepAIC",
                               "SL.lda"),
                SL.method = "method.balance",
                stop.method = "ks.mean"))
#> A weightit object
#>  - method: "super" (propensity score weighting with SuperLearner)
#>  - number of obs.: 614
#>  - sampling weights: none
#>  - treatment: 2-category
#>  - estimand: ATT (focal: 1)
#>  - covariates: age, educ, married, nodegree, re74
summary(W4)
#>                  Summary of weights
#> 
#> - Weight ranges:
#> 
#>            Min                                  Max
#> treated 1.0000                    ||         1.0000
#> control 0.0354 |---------------------------| 1.4688
#> 
#> - Units with 5 most extreme weights by group:
#>                                            
#>               5      4      3      2      1
#>  treated      1      1      1      1      1
#>             411    595    269    409    296
#>  control 1.1051 1.1654 1.2041 1.2831 1.4688
#> 
#> - Weight statistics:
#> 
#>         Coef of Var   MAD Entropy # Zeros
#> treated       0.000 0.000  -0.000       0
#> control       0.739 0.662   0.278       0
#> 
#> - Effective Sample Sizes:
#> 
#>            Control Treated
#> Unweighted  429.       185
#> Weighted    277.67     185
bal.tab(W4)
#> Call
#>  weightit(formula = treat ~ age + educ + married + nodegree + 
#>     re74, data = lalonde, method = "super", estimand = "ATT", 
#>     SL.library = c("SL.glm", "SL.stepAIC", "SL.lda"), SL.method = "method.balance", 
#>     stop.method = "ks.mean")
#> 
#> Balance Measures
#>                Type Diff.Adj
#> prop.score Distance   0.0856
#> age         Contin.   0.0855
#> educ        Contin.  -0.0174
#> married      Binary  -0.0045
#> nodegree     Binary   0.0291
#> re74        Contin.  -0.0760
#> 
#> Effective sample sizes
#>            Control Treated
#> Unadjusted  429.       185
#> Adjusted    277.67     185