Example demonstrating “cross validated training frames” (or “cross frames”) in vtreat.

Consider the data frame built below. The outcome depends only on the “good” numeric variables, not on the “bad” categorical variables, which have very many levels (and therefore many hidden degrees of freedom). Modeling such a data set runs a high risk of over-fitting.

set.seed(22626)

mkData <- function(n) {
  d <- data.frame(xBad1=sample(paste('level',1:1000,sep=''),n,replace=TRUE),
                  xBad2=sample(paste('level',1:1000,sep=''),n,replace=TRUE),
                  xBad3=sample(paste('level',1:1000,sep=''),n,replace=TRUE),
                  xGood1=rnorm(n),
                  xGood2=rnorm(n))
  
  # outcome only depends on the "good" variables; note y is a logical (TRUE/FALSE) outcome
  d$y <- (rnorm(nrow(d)) + 0.2*d$xGood1 + 0.3*d$xGood2) > 0.5
  # the random group used for splitting the data set, not a variable.
  d$rgroup <- sample(c("cal","train","test"),nrow(d),replace=TRUE)  
  d
}

d <- mkData(2000)

# devtools::install_github("WinVector/WVPlots")
# library('WVPlots')
plotRes <- function(d,predName,yName,title) {
  print(title)
  tab <- table(truth=d[[yName]],pred=d[[predName]]>0.5)
  print(tab)
  diag <- sum(vapply(seq_len(min(dim(tab))),
                     function(i) tab[i,i],numeric(1)))
  acc <- diag/sum(tab)
#  if(requireNamespace("WVPlots",quietly=TRUE)) {
#     print(WVPlots::ROCPlot(d,predName,yName,title))
#  }
  print(paste('accuracy',acc))
}

The Wrong Way

Bad practice: use the same set of data to prepare variable encoding and train a model.

dTrain <- d[d$rgroup!='test',,drop=FALSE]
dTest <- d[d$rgroup=='test',,drop=FALSE]
treatments <- vtreat::designTreatmentsC(dTrain,c('xBad1','xBad2','xBad3','xGood1','xGood2'),
                                        'y',TRUE,
  rareCount=0 # Note: usually want rareCount>0, setting to zero to illustrate problem
)
## [1] "vtreat 1.3.2 inspecting inputs Mon Nov  5 08:06:58 2018"
## [1] "designing treatments Mon Nov  5 08:06:58 2018"
## [1] " have initial level statistics Mon Nov  5 08:06:58 2018"
## [1] " scoring treatments Mon Nov  5 08:06:58 2018"
## [1] "have treatment plan Mon Nov  5 08:06:58 2018"
## [1] "rescoring complex variables Mon Nov  5 08:06:58 2018"
## [1] "done rescoring complex variables Mon Nov  5 08:06:58 2018"
dTrainTreated <- vtreat::prepare(treatments,dTrain,
  pruneSig=c() # Note: usually want pruneSig to be a small fraction, setting to null to illustrate problems
)
m1 <- glm(y~xBad1_catB + xBad2_catB + xBad3_catB + xGood1_clean + xGood2_clean,
          data=dTrainTreated,family=binomial(link='logit'))
## Warning: glm.fit: fitted probabilities numerically 0 or 1 occurred
print(summary(m1))  # notice low residual deviance
## 
## Call:
## glm(formula = y ~ xBad1_catB + xBad2_catB + xBad3_catB + xGood1_clean + 
##     xGood2_clean, family = binomial(link = "logit"), data = dTrainTreated)
## 
## Deviance Residuals: 
##      Min        1Q    Median        3Q       Max  
## -2.32190  -0.00014   0.00000   0.00001   2.32399  
## 
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept)   -0.5794     0.3284  -1.764 0.077698 .  
## xBad1_catB     1.0987     0.3627   3.029 0.002454 ** 
## xBad2_catB     0.9302     0.3058   3.042 0.002349 ** 
## xBad3_catB     1.5057     0.4468   3.370 0.000752 ***
## xGood1_clean   0.8404     0.2619   3.209 0.001334 ** 
## xGood2_clean   0.8254     0.2854   2.892 0.003823 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 1724.55  on 1331  degrees of freedom
## Residual deviance:  114.93  on 1326  degrees of freedom
## AIC: 126.93
## 
## Number of Fisher Scoring iterations: 12
dTrain$predM1 <- predict(m1,newdata=dTrainTreated,type='response')
plotRes(dTrain,'predM1','y','model1 on train')
## [1] "model1 on train"
##        pred
## truth   FALSE TRUE
##   FALSE   850   16
##   TRUE      7  459
## [1] "accuracy 0.982732732732733"
dTestTreated <- vtreat::prepare(treatments,dTest,pruneSig=c())
dTest$predM1 <- predict(m1,newdata=dTestTreated,type='response')
plotRes(dTest,'predM1','y','model1 on test')
## [1] "model1 on test"
##        pred
## truth   FALSE TRUE
##   FALSE   316  158
##   TRUE    134   60
## [1] "accuracy 0.562874251497006"

Notice above that we see a training accuracy of 98% and a test accuracy of only 56%. Also notice the downstream model (the glm) erroneously thinks the xBad*_catB variables are significant, because the impact/effect coding hides their large number of degrees of freedom from the downstream model.

The Right Way: A Calibration Set

Now try a proper calibration/train/test split:

dCal <- d[d$rgroup=='cal',,drop=FALSE]
dTrain <- d[d$rgroup=='train',,drop=FALSE]
dTest <- d[d$rgroup=='test',,drop=FALSE]

# a nice heuristic, 
# expect only a constant number of noise variables to sneak past
pruneSig <- 1/ncol(dTrain) 
treatments <- vtreat::designTreatmentsC(dCal,
                                        c('xBad1','xBad2','xBad3','xGood1','xGood2'),
                                        'y',TRUE,
  rareCount=0 # Note: usually want rareCount>0, setting to zero to illustrate problem
)
## [1] "vtreat 1.3.2 inspecting inputs Mon Nov  5 08:06:58 2018"
## [1] "designing treatments Mon Nov  5 08:06:58 2018"
## [1] " have initial level statistics Mon Nov  5 08:06:58 2018"
## [1] " scoring treatments Mon Nov  5 08:06:58 2018"
## [1] "have treatment plan Mon Nov  5 08:06:58 2018"
## [1] "rescoring complex variables Mon Nov  5 08:06:58 2018"
## [1] "done rescoring complex variables Mon Nov  5 08:06:59 2018"
dTrainTreated <- vtreat::prepare(treatments,dTrain,
  pruneSig=pruneSig)
newvars <- setdiff(colnames(dTrainTreated),'y')
m1 <- glm(paste('y',paste(newvars,collapse=' + '),sep=' ~ '),
          data=dTrainTreated,family=binomial(link='logit'))
print(summary(m1))  
## 
## Call:
## glm(formula = paste("y", paste(newvars, collapse = " + "), sep = " ~ "), 
##     family = binomial(link = "logit"), data = dTrainTreated)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.5225  -0.9198  -0.6951   1.1703   2.2995  
## 
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)    
## (Intercept)  -0.69527    0.08873  -7.836 4.65e-15 ***
## xGood1_clean  0.39514    0.08537   4.629 3.68e-06 ***
## xGood2_clean  0.55134    0.09580   5.755 8.66e-09 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 832.55  on 642  degrees of freedom
## Residual deviance: 771.92  on 640  degrees of freedom
## AIC: 777.92
## 
## Number of Fisher Scoring iterations: 4
dTrain$predM1 <- predict(m1,newdata=dTrainTreated,type='response')
plotRes(dTrain,'predM1','y','model1 on train')
## [1] "model1 on train"
##        pred
## truth   FALSE TRUE
##   FALSE   377   41
##   TRUE    160   65
## [1] "accuracy 0.687402799377916"
dTestTreated <- vtreat::prepare(treatments,dTest,
                                pruneSig=pruneSig)
dTest$predM1 <- predict(m1,newdata=dTestTreated,type='response')
plotRes(dTest,'predM1','y','model1 on test')
## [1] "model1 on test"
##        pred
## truth   FALSE TRUE
##   FALSE   425   49
##   TRUE    150   44
## [1] "accuracy 0.702095808383233"

Notice above that we now see training and test accuracies both near 70% (68.7% on train, 70.2% on test). We have defeated over-fit in two ways: training performance is now close to test performance, and test performance is better. Also notice that the “bad” variables failed the pruneSig filter and never reached the downstream model, so the model was never given a chance to be misled by them.

Another Right Way: Cross-Validation

Below is a more statistically efficient practice: building a cross training frame.

The intuition

Consider any trained statistical model (in this case our treatment plan and variable selection plan) as a two-argument function f(A,B). The first argument is the training data and the second argument is the application data. In our case f(A,B) is: designTreatmentsC(A) %>% prepare(B), and it produces a treated data frame.
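
To make this concrete, here is a minimal sketch of f() for this example (the helper name f and its argument choices are illustrative, not part of vtreat):

# illustrative two-argument function: design the treatment plan on A,
# then use that plan to prepare (encode) B
f <- function(A, B) {
  plan <- vtreat::designTreatmentsC(A,
            c('xBad1','xBad2','xBad3','xGood1','xGood2'),
            'y', TRUE,
            rareCount=0, verbose=FALSE)
  vtreat::prepare(plan, B, pruneSig=c())
}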

When we use the same data in both places to build our training frame, as in

TrainTreated = f(TrainData,TrainData),

we are not doing a good job simulating the future application of f(,), which will be f(TrainData,FutureData).

To improve the quality of our simulation we can call

TrainTreated = f(CalibrationData,TrainData)

where CalibrationData and TrainData are disjoint datasets (as we did in the earlier example) and expect this to be a good imitation of future f(CalibrationData,FutureData).
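
In terms of the f() sketch above, the earlier calibration-set example is exactly this pattern (dCal and dTrain being the disjoint calibration and training subsets built earlier):

# wrong way: the same rows both design the encoding and get encoded by it
trainTreatedNaive <- f(dTrain, dTrain)
# calibration-set fix: disjoint rows design the encoding
trainTreatedCal <- f(dCal, dTrain)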

Cross-Validation and vtreat: The Cross-Frame

Another approach is to build a “cross validated” version of f. We split TrainData into a list of 3 disjoint row intervals: Train1,Train2,Train3. Instead of computing f(TrainData,TrainData) compute:

TrainTreated = f(Train2+Train3,Train1) + f(Train1+Train3,Train2) + f(Train1+Train2,Train3)

(where + denotes rbind()).

The idea is this looks a lot like f(TrainData,TrainData) except it has the important property that no row in the right-hand side is ever worked on by a model built using that row (a key characteristic that future data will have) so we have a good imitation of f(TrainData,FutureData).
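
A minimal hand-rolled version of this construction, using the illustrative f() from above, might look like the following (vtreat::mkCrossFrameCExperiment performs this splitting, plus more careful scoring, for you):

# hand-rolled 3-fold cross-frame, for illustration only
# assign each training row to one of three disjoint folds
fold <- sample(1:3, nrow(dTrain), replace=TRUE)
crossFrame <- do.call(rbind, lapply(1:3, function(i) {
  # design the encoding on the other folds, then apply it to fold i
  f(dTrain[fold!=i, , drop=FALSE], dTrain[fold==i, , drop=FALSE])
}))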

In other words: we use cross validation to simulate future data. The main thing we are doing differently is remembering that we can apply cross validation to any two-argument function f(A,B), and not only to functions of the form f(A,B) = buildModel(A) %>% scoreData(B). We can use this formulation in stacking or super-learning with f(A,B) of the form buildSubModels(A) %>% combineModels(B) (to produce a stacked or ensemble model); the idea applies to improving ensemble methods in general.
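
For example, the stacking form of f(A,B) could be simulated as follows (a rough sketch under assumed choices: two single-variable glm sub-models and a glm combiner; none of these choices is prescribed by vtreat):

# sketch: cross-validated stacking
# buildSubModels(A) is run out-of-fold; combineModels(B) is a glm fit on the
# resulting out-of-fold sub-model predictions
fold <- sample(1:3, nrow(dTrain), replace=TRUE)
stackFrame <- do.call(rbind, lapply(1:3, function(i) {
  calib <- dTrain[fold!=i, , drop=FALSE]
  app <- dTrain[fold==i, , drop=FALSE]
  sub1 <- glm(y ~ xGood1, data=calib, family=binomial())
  sub2 <- glm(y ~ xGood2, data=calib, family=binomial())
  data.frame(y = app$y,
             p1 = predict(sub1, newdata=app, type='response'),
             p2 = predict(sub2, newdata=app, type='response'))
}))
# the "super" model sees only simulated out-of-sample sub-model predictions
superModel <- glm(y ~ p1 + p2, data=stackFrame, family=binomial())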

See:

  • “General oracle inequalities for model selection” Charles Mitchell and Sara van de Geer
  • “On Cross-Validation and Stacking: Building seemingly predictive models on random data” Claudia Perlich and Grzegorz Swirszcz
  • “Super Learner” Mark J. van der Laan, Eric C. Polley, and Alan E. Hubbard

In fact you can think of vtreat as a super-learner.

In super learning, cross validation techniques are used to simulate having applied the sub-models to novel data. The simulated out-of-sample applications of these sub-models (and not the sub-models themselves) are then used as input data for the next-stage learner. In future application the actual sub-models are applied and their outputs are used directly by the super model.

In vtreat the sub-models are the single-variable treatments, and construction of the overall model is left to the practitioner (who fits on the cross-frame, which simulates out-of-sample application, rather than on data prepared by the treatment plan). In application to genuinely new data the treatment plan itself is used.

Example

Below is the cross-frame experiment run on our example data. The function mkCrossFrameCExperiment returns a treatment plan for use in preparing future data, and a cross-frame for use in fitting a model.

dTrain <- d[d$rgroup!='test',,drop=FALSE]
dTest <- d[d$rgroup=='test',,drop=FALSE]
prep <- vtreat::mkCrossFrameCExperiment(dTrain,
           c('xBad1','xBad2','xBad3','xGood1','xGood2'),
           'y',TRUE,
           rareCount=0 # Note: usually want rareCount>0, setting to zero to illustrate problems
)
## [1] "vtreat 1.3.2 start initial treatment design Mon Nov  5 08:06:59 2018"
## [1] " start cross frame work Mon Nov  5 08:06:59 2018"
## [1] " vtreat::mkCrossFrameCExperiment done Mon Nov  5 08:07:00 2018"
treatments <- prep$treatments

knitr::kable(treatments$scoreFrame[,c('varName','sig')])
varName         sig
xBad1_catP      0.8685784
xBad1_catB      0.0942444
xBad2_catP      0.8558471
xBad2_catB      0.1142775
xBad3_catP      0.6981315
xBad3_catB      0.1103321
xGood1_clean    0.0000000
xGood2_clean    0.0000000
colnames(prep$crossFrame)
## [1] "xBad1_catP"   "xBad1_catB"   "xBad2_catP"   "xBad2_catB"  
## [5] "xBad3_catP"   "xBad3_catB"   "xGood1_clean" "xGood2_clean"
## [9] "y"
# vtreat::mkCrossFrameCExperiment doesn't take a pruneSig argument, but we can
# prune on our own.
print(pruneSig)
## [1] 0.1428571
newvars <- treatments$scoreFrame$varName[treatments$scoreFrame$sig<=pruneSig]
# force the "bad" variables back in, as a "belt and suspenders" demonstration:
# things still go well in the cross-frame even if they sneak past pruning
newvars <- sort(union(newvars,c("xBad1_catB","xBad2_catB","xBad3_catB")))
print(newvars)
## [1] "xBad1_catB"   "xBad2_catB"   "xBad3_catB"   "xGood1_clean"
## [5] "xGood2_clean"
dTrainTreated <- prep$crossFrame

We forced the undesirable xBad*_catB variables back in to demonstrate that even if they sneak past a loose pruneSig, the cross-frame lets the downstream model deal with them correctly. For more consistent filtering of the complicated variables one can increase the ncross argument in vtreat::mkCrossFrameCExperiment / vtreat::mkCrossFrameNExperiment.
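
For example (a sketch only, not part of this vignette's run; ncross controls the number of cross-validation folds used to build the cross-frame):

# request more folds for more stable cross-frame estimates
# of the high-cardinality variables
prep10 <- vtreat::mkCrossFrameCExperiment(dTrain,
             c('xBad1','xBad2','xBad3','xGood1','xGood2'),
             'y', TRUE,
             rareCount=0,
             ncross=10)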

Now we fit the model to the cross-frame rather than to prepare(treatments, dTrain) (the treated training data).

m1 <- glm(paste('y',paste(newvars,collapse=' + '),sep=' ~ '),
          data=dTrainTreated,family=binomial(link='logit'))
print(summary(m1))  
## 
## Call:
## glm(formula = paste("y", paste(newvars, collapse = " + "), sep = " ~ "), 
##     family = binomial(link = "logit"), data = dTrainTreated)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.6624  -0.9170  -0.6663   1.1747   2.2971  
## 
## Coefficients:
##               Estimate Std. Error z value Pr(>|z|)    
## (Intercept)  -0.687112   0.065340 -10.516  < 2e-16 ***
## xBad1_catB    0.007962   0.009466   0.841    0.400    
## xBad2_catB   -0.014104   0.009579  -1.472    0.141    
## xBad3_catB    0.014359   0.009331   1.539    0.124    
## xGood1_clean  0.405918   0.061888   6.559 5.42e-11 ***
## xGood2_clean  0.570827   0.064946   8.789  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 1724.6  on 1331  degrees of freedom
## Residual deviance: 1586.6  on 1326  degrees of freedom
## AIC: 1598.6
## 
## Number of Fisher Scoring iterations: 4
dTrain$predM1 <- predict(m1,newdata=dTrainTreated,type='response')
plotRes(dTrain,'predM1','y','model1 on train')
## [1] "model1 on train"
##        pred
## truth   FALSE TRUE
##   FALSE   775   91
##   TRUE    331  135
## [1] "accuracy 0.683183183183183"
dTestTreated <- vtreat::prepare(treatments,dTest,
                                pruneSig=c(),varRestriction=newvars)
knitr::kable(head(dTestTreated))
 xBad1_catB  xBad2_catB  xBad3_catB  xGood1_clean  xGood2_clean  y
  0.0000000   0.6196992   -8.590741     0.4217559     0.3143976  TRUE
 -8.5907412   0.0000000   -9.283838    -1.6801750    -0.0767822  TRUE
 -9.6892868   0.0000000    9.830139     1.0637346     0.8217212  FALSE
  9.8301395  -0.0733980   -8.590741     0.2954393     0.3517839  TRUE
 -9.2838384  -8.5907412    0.000000     0.9866599     0.3880777  FALSE
  0.6196992   9.8301395    9.830139     1.1893923     0.3922303  TRUE
dTest$predM1 <- predict(m1,newdata=dTestTreated,type='response')
plotRes(dTest,'predM1','y','model1 on test')
## [1] "model1 on test"
##        pred
## truth   FALSE TRUE
##   FALSE   421   53
##   TRUE    145   49
## [1] "accuracy 0.703592814371258"

We again get around 70% test accuracy, and this time with a more statistically efficient procedure, as we did not have to set aside a separate calibration set.

The model fit to the cross-frame behaves similarly to the model produced via the process f(CalibrationData, TrainData). Notice that the xBad*_catB variables fail to achieve significance in the downstream glm, which allows that model to give them small coefficients and even (if need be) prune them out. This is the point of using a cross-frame: as we saw in the first example, the xBad*_catB variables are hard to remove once they make it into a standard (non-cross) frame, because they hide a lot of degrees of freedom from downstream modeling procedures.