A preview of new features in bootnet 1.1

This blog post was written by Sacha Epskamp.

In the past year, we have published two tutorial papers on the bootnet package for R, which aims to provide an encompassing framework for estimating network structures and checking their accuracy and stability. The first paper, published in Behavior Research Methods, focuses on how to use the bootstrapping methods to assess the accuracy and stability of network structures. The second, published in Psychological Methods, focuses on the most commonly used network model, the Gaussian graphical model (GGM; a network of partial correlations), and discusses further utility of the bootnet package.

With more than a year of development, version 1.1 of the bootnet package marks the most ambitious series of updates to the package to date. This version is now ready for testing on GitHub, and will soon be released on CRAN. It includes or fleshes out a total of 7 new default sets, including one aimed at time-series analysis, and adds new functionality to the default sets already in the package. In addition, the package now fully supports directed networks as well as methods resulting in multiple network models, and supports the expected influence centrality measure. Furthermore, the plotting functions in bootnet have been updated and now show more information. Finally, the package includes a new simulation function, replicationSimulator, and greatly improves the functionality of the netSimulator function. With these extensions, bootnet now aims to provide an exhaustive simulation suite for network models in many conditions. This blog post introduces the new functionality and is intended to encourage testing before the package is submitted to CRAN.

The package can be installed using:

library("devtools")
install_github("sachaepskamp/bootnet")

which requires developer tools to be installed: Rtools for Windows and Xcode for Mac. Subsequently, the package can be loaded as usual:

library("bootnet")

Updates to network estimation

Bootnet now contains the following default sets:

| default | model | Description | Packages | Recently added |
|---|---|---|---|---|
| "EBICglasso" | GGM | Regularized GGM estimation using glasso and EBIC model selection | qgraph, glasso | |
| "pcor" | GGM | Unregularized GGM estimation, possibly thresholded using significance testing or FDR | qgraph | |
| "IsingFit" | Ising | Regularized Ising models using nodewise logistic regressions and EBIC model selection | IsingFit, glmnet | |
| "IsingSampler" | Ising | Unregularized Ising model estimation | IsingSampler | |
| "huge" | GGM | Regularized GGM estimation using glasso and EBIC model selection | huge | |
| "adalasso" | GGM | Regularized GGM estimation using adaptive LASSO and cross-validation | parcor | |
| "mgm" | Mixed graphical model | Regularized mixed graphical model estimation using EBIC or cross-validation | mgm | Revamped |
| "relimp" | Relative importance networks | Relative importance networks, possibly using another default for the network structure | relaimpo | Yes |
| "cor" | Correlation networks | Correlation networks, possibly thresholded using significance testing or FDR | qgraph | Yes |
| "TMFG" | Correlation networks & GGM | Triangulated Maximally Filtered Graph | NetworkToolbox | Yes |
| "LoGo" | GGM | Local/Global Sparse Inverse Covariance Matrix | NetworkToolbox | Yes |
| "ggmModSelect" | GGM | Unregularized (E)BIC model selection using glasso and stepwise estimation | qgraph, glasso | Yes |
| "graphicalVAR" | Graphical VAR | Regularized estimation of temporal and contemporaneous networks | graphicalVAR | Yes |

As can be seen, several of these are newly added. For example, we can now use the two more conservative GGM estimation methods recently added to qgraph, which I described in an earlier blog post. Taking my favorite dataset as an example:

library("psych")
data(bfi)

The optimal unregularized network that minimizes BIC (note that tuning defaults to 0 in this case) can be obtained as follows:

net_modSelect <- estimateNetwork(bfi[,1:25], 
              default = "ggmModSelect",
              stepwise = FALSE,
              corMethod = "cor")

It is generally recommended to use the stepwise improvement of edges with stepwise = TRUE, but I set it to FALSE here to make this tutorial easier to follow. Likewise, the thresholded regularized GGM can be obtained using the new threshold argument for the EBICglasso default set:

net_thresh <- estimateNetwork(bfi[,1:25],
              tuning = 0, # EBICglasso sets tuning to 0.5 by default
              default = "EBICglasso",
              threshold = TRUE,
              corMethod = "cor")

As usual, we can plot the networks:

Layout <- qgraph::averageLayout(net_modSelect, net_thresh)
layout(t(1:2))
plot(net_modSelect, layout = Layout, title = "ggmModSelect")
plot(net_thresh, layout = Layout, title = "Thresholded EBICglasso")


Principal direction and expected influence

One robust finding in the psychological literature is that all variables tend to correlate positively after arbitrary rescaling of variables – the positive manifold. Likewise, it is common to expect parameters focusing on conditional associations (e.g., partial correlations) to also be positive after rescaling variables. Negative edges in such a network could indicate (a) violations of the latent variable model or (b) the presence of common cause effects in the data. To this end, bootnet now includes the ‘principalDirection’ argument in many default sets, which takes the first eigenvector of the correlation matrix and multiplies every variable by the sign of its corresponding element in that eigenvector. For example:

net_modSelect_rescale <- estimateNetwork(bfi[,1:25], 
              default = "ggmModSelect",
              stepwise = FALSE,
              principalDirection = TRUE)

net_thresh_rescale <- estimateNetwork(bfi[,1:25],
              tuning = 0, 
              default = "EBICglasso",
              threshold = TRUE,
              principalDirection = TRUE)

layout(t(1:2))
plot(net_modSelect_rescale, layout = Layout, 
     title = "ggmModSelect")
plot(net_thresh_rescale, layout = Layout, 
     title = "Thresholded EBICglasso")


This makes the edges of an unexpected sign much more pronounced, at the cost of interpretability (as the rescaling of some variables now has to be taken into account).
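Under the hood, the rescaling itself is simple. Below is a minimal base-R sketch of the idea (my reading of the principalDirection option; the function name is made up and this is not bootnet's actual code):

```r
# Sketch of principal-direction rescaling (assumed behavior, not
# bootnet's exact implementation): flip each variable by the sign of its
# loading on the first eigenvector of the correlation matrix.
principal_direction <- function(data) {
  ev1 <- eigen(cor(data))$vectors[, 1]  # first eigenvector
  signs <- sign(ev1)
  signs[signs == 0] <- 1                # guard against exact zeros
  sweep(data, 2, signs, `*`)            # rescale each column by +/- 1
}

# Toy data in which the second variable is scored in the "wrong" direction:
set.seed(1)
x <- rnorm(200)
data <- cbind(a = x + rnorm(200), b = -x + rnorm(200), c = x + rnorm(200))
rescaled <- principal_direction(data)
cor(rescaled)  # all pairwise correlations are now positive
```

Note that the overall sign of the eigenvector is immaterial: flipping every variable at once leaves all correlations intact, so the result is the same either way.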

One potentially useful centrality index for graphs that mostly show positive relationships (with all variables recoded in the same direction) is the newly proposed expected influence measure. Expected influence computes node strength without taking the absolute value of the edge weights. This centrality measure can now be obtained as follows:

qgraph::centralityPlot(
  list(
    ggmModSelect = net_modSelect_rescale,
    EBICGlasso_thresh = net_thresh_rescale
  ), include = "ExpectedInfluence"
)
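Numerically, expected influence differs from strength only in dropping the absolute value. A toy base-R illustration with a made-up weights matrix:

```r
# Strength sums absolute edge weights per node; expected influence keeps
# the signs, so negative edges lower a node's value.
W <- matrix(c( 0.0,  0.3, -0.2,
               0.3,  0.0,  0.4,
              -0.2,  0.4,  0.0), 3, 3, byrow = TRUE)

strength           <- rowSums(abs(W))  # 0.5 0.7 0.6
expected_influence <- rowSums(W)       # 0.1 0.7 0.2
```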


To bootstrap expected influence, it has to be requested from bootnet():

boots <- bootnet(net_thresh_rescale, statistics = "ExpectedInfluence", 
                 nBoots = 1000, nCores = 8, type = "case")

library("ggplot2")
plot(boots, statistics = "ExpectedInfluence") + 
  theme(legend.position = "none")


In addition to expected influence, thanks to the contribution of Alex Christensen, randomized shortest paths betweenness centrality (RSPBC) and hybrid centrality are now also supported, which can be called using statistics = c("rspbc", "hybrid").

Relative importance networks

Relative importance networks can be seen as a re-parameterization of the GGM in which directed edges are used to quantify the (relative) contribution in predictability of each variable on other variables. This can be useful mainly for two reasons: (1) the relative importance measures are very stable, and (2) centrality indices based on relative importance networks have more natural interpretations. For example, the in-strength of non-normalized relative importance networks equals \(R^2\). Bootnet has been updated to support directed networks, which in turn allows for estimating relative importance networks. The estimation may be slow with more than 20 variables, though. Using only the first 10 variables of the BFI dataset, we can compute the relative importance network as follows:

net_relimp <- estimateNetwork(bfi[,1:10],
              default = "relimp",
              normalize = FALSE)
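The in-strength property mentioned above is easy to verify: LMG-type relative importance averages, over all orderings of the predictors, the increase in \(R^2\) when a predictor enters the model, and the non-normalized contributions sum to the full model's \(R^2\). A self-contained base-R check on simulated data (a naive stand-in for relaimpo, feasible here because there are only three predictors):

```r
# LMG relative importance: average over all predictor orderings of the
# increase in R^2 when a predictor is added. The non-normalized
# contributions sum to the full model's R^2 ("in-strength equals R^2").
set.seed(1)
n <- 500
X <- matrix(rnorm(n * 3), n, 3)
y <- as.vector(X %*% c(0.4, 0.3, 0.1) + rnorm(n))

r2 <- function(vars) {  # R^2 of y regressed on a subset of predictors
  if (length(vars) == 0) return(0)
  summary(lm(y ~ X[, vars, drop = FALSE]))$r.squared
}
perms <- function(v) {  # all orderings of v
  if (length(v) <= 1) return(list(v))
  out <- list()
  for (i in seq_along(v))
    for (p in perms(v[-i])) out <- c(out, list(c(v[i], p)))
  out
}
lmg <- sapply(1:3, function(j) {  # averaged R^2 gain of predictor j
  mean(sapply(perms(1:3), function(ord) {
    pos <- which(ord == j)
    r2(ord[seq_len(pos)]) - r2(ord[seq_len(pos - 1)])
  }))
})
sum(lmg) - r2(1:3)  # numerically zero
```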

As the relative importance network can be seen as a re-parameterization of the GGM, it makes sense to first estimate a GGM structure and subsequently impose that structure on the relative importance network estimation. This can be done using the structureDefault argument:

net_relimp2 <- estimateNetwork(bfi[,1:10],
              default = "relimp",
              normalize = FALSE,
              structureDefault = "ggmModSelect",
              stepwise = FALSE # Sent to structureDefault function
    )
Layout <- qgraph::averageLayout(net_relimp, net_relimp2)
layout(t(1:2))
plot(net_relimp, layout = Layout, title = "Saturated")
plot(net_relimp2, layout = Layout, title = "Non-saturated")


The difference is hardly visible because edges that are removed in the GGM structure estimation are not likely to contribute a lot of predictive power.

Time-series analysis

The LASSO regularized estimation of graphical vector auto-regression (VAR) models, as implemented in the graphicalVAR package, is now supported in bootnet! For example, we can use the data and codes supplied in the supplementary materials of our recent publication in Clinical Psychological Science to obtain the detrended data object Data. Now we can run the graphical VAR model (I use nLambda = 8 here to speed up computation, but higher values are recommended and are used in the paper):

# Variables to include:
Vars <- c("relaxed","sad","nervous","concentration","tired","rumination",
          "bodily.discomfort")

# Estimate model:
gvar <- estimateNetwork(
  Data, default = "graphicalVAR", vars = Vars,
  tuning = 0, dayvar = "date", nLambda = 8
)
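Conceptually, the temporal network collects lag-1 regression coefficients, while the contemporaneous network is a GGM on the residuals. An unregularized base-R sketch of the temporal part (graphicalVAR adds LASSO regularization and handles details such as day boundaries):

```r
# Unregularized VAR(1) sketch: regress each variable at time t on all
# variables at time t - 1. Row i of the resulting matrix holds the
# temporal effects onto variable i. Simulated white-noise data.
set.seed(1)
Y <- matrix(rnorm(100 * 3), 100, 3)
lagged  <- Y[-nrow(Y), ]  # observations at t - 1
current <- Y[-1, ]        # observations at t
temporal <- t(coef(lm(current ~ lagged))[-1, ])
dim(temporal)  # 3 x 3 matrix of lag-1 effects
```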

We can now plot both networks:

Layout <- qgraph::averageLayout(gvar$graph$temporal, 
                                gvar$graph$contemporaneous)
layout(t(1:2))
plot(gvar, graph = "temporal", layout = Layout, 
     title = "Temporal")
plot(gvar, graph = "contemporaneous", layout = Layout, 
     title = "Contemporaneous")


The bootstrap can be performed as usual:

gvar_boot <- bootnet(gvar, nBoots = 100, nCores = 8)

To plot the results, we need to make use of the graph argument:

plot(gvar_boot, graph = "contemporaneous", plot = "interval")


Updates to bootstrapping methods

Splitting edge accuracy and model inclusion

Some of the new default sets (ggmModSelect, TMFG and LoGo) do not rely on regularization techniques to pull estimates to zero. Rather, they first select a set of edges to include, then estimate a parameter value only for the included edges. The default plotting method of edge accuracy, which has been updated to include the means of the bootstraps, will then not accurately reflect the range of parameter values. Making use of arguments plot = "interval" and split0 = TRUE, we can plot quantile intervals only for the times the parameter was not set to zero, in addition to a box indicating how often the parameter was set to zero:

# Agreeableness items only to speed things up:
net_modSelect_A <- estimateNetwork(bfi[,1:5], 
              default = "ggmModSelect",
              stepwise = TRUE,
              corMethod = "cor")

# Bootstrap:
boot_modSelect <- bootnet(net_modSelect_A, nBoots = 100, nCores = 8)

# Plot results:
plot(boot_modSelect, plot = "interval", split0 = TRUE)


This shows that the edge A1 (Am indifferent to the feelings of others) – A5 (Make people feel at ease) was always removed from the network, contradicting the factor model. The transparency of the intervals also shows how often an edge was included. For example, the edge A1 – A4 was almost never included, but when it was included, it was estimated to be negative.
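The quantities behind such a plot are simple to compute. For a single edge, given a vector of bootstrapped estimates (made-up numbers):

```r
# With model selection, an edge is set to exactly zero in some bootstraps.
# split0 = TRUE separates "how often was the edge included" from "what was
# its value when included".
set.seed(1)
est <- c(rep(0, 40), rnorm(60, mean = 0.15, sd = 0.05))  # 100 bootstraps

prop_zero <- mean(est == 0)                      # proportion set to zero: 0.4
nonzero   <- est[est != 0]
interval  <- quantile(nonzero, c(0.025, 0.975))  # interval given inclusion
```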

Accuracy of directed networks

Accuracy plots of directed networks are now supported:

net_relimp_A <- estimateNetwork(bfi[,1:5],
              default = "relimp",
              normalize = FALSE)
boot_relimp_A <- bootnet(net_relimp_A, nBoots = 100, nCores = 8)
plot(boot_relimp_A, order = "sample")


Simulation suite

The netSimulator function and accompanying plot method have been greatly expanded. The most basic use of the function is to simulate the performance of an estimation method given some network structure:

Sim1 <- netSimulator(
  input = net_modSelect, 
  dataGenerator = ggmGenerator(),
  nCases = c(100,250,500,1000),
  nCores = 8,
  nReps = 100,
  default = "ggmModSelect",
  stepwise = FALSE)

plot(Sim1)


Instead of keeping the network structure fixed, we can also use a function as the input argument to generate a different structure in every replication. The updated genGGM function allows for many such structures (thanks to Mark Brandt!). In addition, we can supply multiple values to any estimation argument to test multiple conditions. For example, perhaps we are interested in investigating whether the stepwise model improvement is really needed in 10-node random networks with \(25\%\) density (\(75\%\) sparsity):

Sim2 <- netSimulator(
  input = function()bootnet::genGGM(10, p = 0.25, graph = "random"),  
  dataGenerator = ggmGenerator(),
  nCases = c(100,250,500,1000),
  nCores = 8,
  nReps = 100,
  default = "ggmModSelect",
  stepwise = c(FALSE, TRUE))

plot(Sim2, color = "stepwise")


which shows slightly better specificity at higher sample sizes. We might wish to repeat this simulation study for 50% sparse networks:

Sim3 <- netSimulator(
  input = function()bootnet::genGGM(10, p = 0.5, graph = "random"),  
  dataGenerator = ggmGenerator(),
  nCases = c(100,250,500,1000),
  nCores = 8,
  nReps = 100,
  default = "ggmModSelect",
  stepwise = c(FALSE, TRUE))

plot(Sim3, color = "stepwise")


The results object is simply a data frame, meaning we can combine our results easily and use the plot method very flexibly:

Sim2$sparsity <- "sparsity: 0.75"
Sim3$sparsity <- "sparsity: 0.50"
Sim23 <- rbind(Sim2,Sim3)
plot(Sim23, color = "stepwise", yfacet = "sparsity")


Investigating the recovery of centrality (correlation with true centrality) is also possible:

plot(Sim23, color = "stepwise", yfacet = "sparsity",
    yvar = c("strength", "closeness", "betweenness", "ExpectedInfluence"))


Likewise, we can also change the x-axis variable:

Sim23$sparsity2 <- gsub("sparsity: ", "", Sim23$sparsity)
plot(Sim23, color = "stepwise", yfacet = "nCases",
    xvar = "sparsity2", xlab = "Sparsity")


This allows for setting up powerful simulation studies with minimal effort. For example, inspired by the work of Williams and Rast, we could compare such an un-regularized method with a regularized method and significance thresholding:

Sim2b <- netSimulator(
  input = function()bootnet::genGGM(10, p = 0.25, graph = "random"),  
  dataGenerator = ggmGenerator(),
  nCases = c(100,250,500,1000),
  nCores = 8,
  nReps = 100,
  default = "EBICglasso",
  threshold = c(FALSE, TRUE))

Sim2c <- netSimulator(
  input = function()bootnet::genGGM(10, p = 0.25, graph = "random"),  
  dataGenerator = ggmGenerator(),
  nCases = c(100,250,500,1000),
  nCores = 8,
  nReps = 100,
  default = "pcor",
  threshold = "sig",
  alpha = c(0.01, 0.05))

# add a variable:
Sim2$estimator <- paste0("ggmModSelect", 
                         ifelse(Sim2$stepwise,"+step","-step"))
Sim2b$estimator <- paste0("EBICglasso",
                    ifelse(Sim2b$threshold,"+threshold","-threshold"))
Sim2b$threshold <- NULL
Sim2c$estimator <- paste0("pcor (a = ",Sim2c$alpha,")")

# Combine:
Sim2full <- dplyr::bind_rows(Sim2, Sim2b, Sim2c)

# Plot:
plot(Sim2full, color = "estimator")


Replication simulator

In response to a number of studies aiming to investigate the replicability of network models, I have implemented a function derived from netSimulator that instead simulates two datasets from a network model, treating the second as a replication dataset. This allows researchers to investigate what to expect when aiming to replicate an effect. The input and usage are virtually identical to those of netSimulator:

SimRep <- replicationSimulator(
  input = function()bootnet::genGGM(10, p = 0.25, graph = "random"),  
  dataGenerator = ggmGenerator(),
  nCases = c(100,250,500,1000),
  nCores = 8,
  nReps = 100,
  default = "ggmModSelect",
  stepwise = c(FALSE, TRUE))

plot(SimRep, color = "stepwise")


Future developments

As always, I highly welcome bug reports and code suggestions on GitHub. I would also welcome any volunteers willing to help work on this project. This work can include adding new default sets, but also overhauling the help pages and other improvements. Please contact me if you are interested!

New features in qgraph 1.5

Written by Sacha Epskamp.

While the developmental version is routinely updated, I update the stable qgraph releases on CRAN less often. The last major update (version 1.4) was 1.5 years ago. After several minor updates since, I have now completed work on a new larger version of the package, version 1.5, which is now available on CRAN. The full list of changes can be read in the NEWS file. In this blog post, I will describe some of the new functionality.

New conservative GGM estimation algorithms

Recently, there has been some debate on the specificity of EBICglasso in exploratory estimation of Gaussian graphical models (GGM). While EBIC selection of regularized glasso networks works well in retrieving network structures at low sample sizes, Donald Williams and Philippe Rast recently showed that specificity can be lower than expected in dense networks with many small edges, leading to an increase in false positives. These false edges are nearly invisible under default qgraph fading options (also due to regularization), and should not influence typical interpretations of these models. However, some lines of research focus on discovering the smallest edges (e.g., bridge symptoms or environmental edges), and there have been increasing concerns regarding the replicability of such small edges. To this end, qgraph 1.5 now includes a warning when a dense network is selected, and includes two new, more conservative estimation algorithms: thresholded EBICglasso estimation and unregularized model selection.

Thresholded EBICglasso

Based on recent work by Jankova and Van de Geer (2018), a low false positive rate is guaranteed for off-diagonal (\(i \not= j\)) precision matrix elements (proportional to partial correlation coefficients) \(\kappa_{ij}\) for which:

\[
|\kappa_{ij}| > \frac{\log\left(p(p-1)/2\right)}{\sqrt{n}}.
\]
The option threshold = TRUE in EBICglasso and qgraph(..., graph = "glasso") now employs this thresholding rule, setting to zero all edge weights that do not exceed the threshold, both in the returned final model and in the EBIC computation of all considered models. Preliminary simulations indicate that with this thresholding rule, high specificity is guaranteed in many cases (an exception is the case in which the true model is not in the glasso path, at very high sample sizes such as \(N > 10{,}000\)). A benefit of this approach over the unregularized option described below is that edge parameters are still regularized, preventing large visual overrepresentations due to sampling error.
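Numerically, the rule is straightforward. A base-R sketch applying it to a made-up matrix of precision-matrix elements (qgraph's internal implementation may differ in details):

```r
# Zero out off-diagonal elements that do not exceed log(p(p-1)/2)/sqrt(n).
# For p = 3 and n = 500 the threshold is log(3)/sqrt(500), about 0.049.
threshold_precision <- function(K, n) {
  p <- ncol(K)
  thresh <- log(p * (p - 1) / 2) / sqrt(n)
  K[abs(K) <= thresh & row(K) != col(K)] <- 0
  K
}

K <- matrix(c(1.00, 0.02, 0.30,
              0.02, 1.00, 0.01,
              0.30, 0.01, 1.00), 3, 3, byrow = TRUE)
threshold_precision(K, n = 500)  # only the 0.30 element survives
```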

The following codes showcase non-thresholded vs thresholded EBICglasso:

library("qgraph")
library("psych")
data(bfi)
bfiSub <- bfi[,1:25]
layout(t(1:2))
g1 <- qgraph(cor_auto(bfiSub), graph = "glasso", sampleSize = nrow(bfi),
             layout = "spring", theme = "colorblind", title = "EBICglasso", 
             cut = 0)
g2 <- qgraph(cor_auto(bfiSub), graph = "glasso", sampleSize = nrow(bfi), 
       threshold = TRUE, layout = g1$layout, theme = "colorblind", 
       title = "Thresholded EBICglasso", cut = 0)


While the thresholded graph is much sparser, that does not mean all removed edges are false positives. Many are likely reflecting true edges.

Unregularized Model Search

While the LASSO has mostly been studied in high-dimensional low-sample cases, in many situations research focuses on relatively low-dimensional (e.g., 20 nodes) settings with high sample size (e.g., \(N > 1{,}000\)). In such settings, it is debatable whether regularization techniques are really needed. In the particular case of GGMs, one could also use model selection on unregularized models in which some pre-defined edge weights are set to zero. It has been shown that extended Bayesian information criterion (EBIC) selection of such unregularized models selects the true model as \(N\) grows to \(\infty\) (Foygel and Drton, 2010). The new function ggmModSelect now supports model search of unregularized GGM models, using the EBIC exactly as it is computed in the lavaan package. The hypertuning parameter \(\gamma\) is set by default to \(0\) (BIC selection) rather than \(0.5\), as preliminary simulations indicate that \(\gamma = 0\) shows much better sensitivity while retaining high specificity. By default, ggmModSelect first runs the glasso algorithm for \(100\) different tuning parameters to obtain \(100\) different network structures. Next, it refits all those networks without regularization and picks the best one. Subsequently, the algorithm adds and removes edges until the EBIC can no longer be improved. The full algorithm is:

  1. Run glasso to obtain 100 models
  2. Refit all models without regularization
  3. Choose the best model according to EBIC
  4. Test all models in which one edge is changed (added or removed)
  5. If no change improves EBIC, stop here
  6. Apply the change that improves EBIC the most, then re-test all other changes that would also have improved the EBIC
  7. If no further change improves EBIC, go to 4; else, go to 6
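The criterion used in steps 3 to 7 is the extended BIC. A sketch of its standard formulation for a GGM with E included edges, p nodes and sample size n (with \(\gamma = 0\) it reduces to the BIC; the numbers below are hypothetical):

```r
# EBIC = -2 * logLik + E * log(n) + 4 * gamma * E * log(p)
# (Foygel & Drton, 2010). A denser model must gain enough log-likelihood
# to pay for its extra edges.
ebic <- function(logLik, n, E, p, gamma = 0) {
  -2 * logLik + E * log(n) + 4 * gamma * E * log(p)
}

# Two extra edges buy 5 log-likelihood points, which is not enough at
# n = 2800 and gamma = 0.5, so the sparser model wins:
ebic(logLik = -3000, n = 2800, E = 100, p = 25, gamma = 0.5)
ebic(logLik = -2995, n = 2800, E = 102, p = 25, gamma = 0.5)
```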

When stepwise = FALSE, steps 4 to 7 are ignored, and when considerPerStep = "all", all edges are considered at every step. The following codes showcase the algorithm:

modSelect_0 <- ggmModSelect(cor_auto(bfiSub), nrow(bfi), gamma = 0, nCores = 8)
modSelect_0.5 <- ggmModSelect(cor_auto(bfiSub), nrow(bfi), gamma = 0.5, nCores = 8)
layout(t(1:2))
g3 <- qgraph(modSelect_0$graph, layout = g1$layout, theme = "colorblind", 
       title = "ggmModSelect (gamma = 0)", cut = 0)
g4 <- qgraph(modSelect_0.5$graph, layout = g1$layout, theme = "colorblind", 
       title = "ggmModSelect (gamma = 0.5)", cut = 0)


Note that this algorithm is very slow in higher dimensions (e.g., above 30-40 nodes), in which case only the regular EBICglasso, thresholded EBICglasso, or setting stepwise = FALSE are feasible. Of note, centrality analyses, especially of the more stable strength metric, are hardly impacted by the estimation method:

centralityPlot(
  list(EBICglasso=g1,
       EBICglasso_threshold=g2,
       ggmModSelect_0 = g3,
       ggmModSelect_0.5 = g4
       ))


Both thresholded EBICglasso and ggmModSelect are implemented in the development version of bootnet, which will be updated soon to CRAN as well. Preliminary simulations show that both guarantee high specificity, while losing sensitivity. Using ggmModSelect with \(\gamma = 0\) (BIC selection) shows better sensitivity and works well in detecting small edges, but is slow when coupled with stepwise model search, which may make bootstrapping hard. I encourage researchers to investigate these and competing methods in large-scale simulation studies.

Which estimation method to use?

Both new methods are much more conservative than the EBICglasso, leading to drops in sensitivity and possible misrepresentations of the true sparsity of the network structure. For exploratory hypothesis-generation purposes at relatively low sample sizes, the original EBICglasso is likely to be preferred. At higher sample sizes, and with a focus on identifying small edges, the conservative methods may be preferred instead. There are many more GGM estimation procedures available in other R packages, and detailed simulation studies investigating which estimator works best in which case are now being performed in multiple labs. I have also implemented simulation functions in the developmental version of bootnet to aid in studying these methods, which I will describe in an upcoming blog post.

Flow diagrams

Sometimes, researchers are interested in the connectivity of one node in particular, which can be hard to see with the Fruchterman-Reingold algorithm, especially when the connections to that node are weak. The new flow function, which I developed together with Adela Isvoranu, can be used to place nodes in such a way that the connections of one node are clearly visible. The function places the node of interest on the left, and then places the remaining nodes in vertical levels according to the number of edges (1, 2, 3, and so on) needed to reach them from the node of interest. Edges between nodes in the same level are displayed as curved edges. For example:

flow(g2, "N3", theme = "colorblind", vsize = 4)
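The levels that flow uses correspond to geodesic distances from the focal node. A base-R breadth-first search sketch of that level assignment (not qgraph's implementation):

```r
# Breadth-first search: level[i] = number of edges on the shortest path
# from the focal node to node i (NA if unreachable).
flow_levels <- function(adjacency, from) {
  n <- nrow(adjacency)
  level <- rep(NA_integer_, n)
  level[from] <- 0L
  frontier <- from
  while (length(frontier) > 0) {
    # Unvisited nodes adjacent to any node in the current frontier:
    nxt <- which(colSums(adjacency[frontier, , drop = FALSE] != 0) > 0 &
                   is.na(level))
    level[nxt] <- level[frontier[1]] + 1L
    frontier <- nxt
  }
  level
}

# Toy graph: 1-2, 2-3, 3-4, 2-5
A <- matrix(0, 5, 5)
A[1, 2] <- A[2, 3] <- A[3, 4] <- A[2, 5] <- 1
A <- A + t(A)
flow_levels(A, 1)  # 0 1 2 3 2
```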


Expected influence

The centrality index expected influence is now returned by centrality() and can be plotted using centralityPlot(), although it has to be requested using include. In addition, the plots can now be ordered by one of the indices:

centralityPlot(g2, include = c("Strength","ExpectedInfluence"),
               orderBy = "ExpectedInfluence")


Note, however, that the BFI network is not an optimal network to compute expected influence on, as some variables are (arbitrarily) scored negatively. It is best to compute expected influence on a network in which higher scores on all nodes have the same interpretation (e.g., symptoms in which higher = more severe).

Future developments

As always, I highly welcome bug reports and code suggestions on Github. In addition, I will also update the bootnet package soon and write a separate blog post on its latest additions.

References

Jankova, J., and van de Geer, S. (2018) Inference for high-dimensional graphical models. In: Handbook of graphical models (editors: Drton, M., Maathuis, M., Lauritzen, S., and Wainwright, M.). CRC Press: Boca Raton, Florida, USA.

Foygel, R., & Drton, M. (2010). Extended Bayesian information criteria for Gaussian graphical models. In Advances in neural information processing systems (pp. 604-612).

Recent developments on the performance of graphical LASSO networks

This post was written by Sacha Epskamp, Gaby Lunansky, Pia Tio and Denny Borsboom, on behalf of the Psychosystems lab group.

Currently, many network models are estimated using LASSO regularization. For ordinal and continuous variables, a popular option is to use the graphical LASSO (GLASSO), in which the network is estimated as a sparse inverse of the variance-covariance matrix. For a description of this method, see our recently published tutorial paper on this topic. Last week we were informed of a new preprint by Donald Williams and Philippe Rast, who show that under certain conditions the GLASSO, coupled with EBIC model selection (EBICglasso), can have lower specificity than expected. Specificity is defined as 1 minus the probability of estimating an edge that is not in the true model. Thus, the results of Williams and Rast indicate that in certain cases edges may be retrieved that do not correspond to true edges. This is interesting, as LASSO methods usually show high specificity all-around.
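For a known true structure, specificity and sensitivity are simple confusion-matrix quantities on the edge set. A base-R sketch with toy adjacency matrices:

```r
# Specificity: proportion of true zero edges estimated as zero.
# Sensitivity: proportion of true edges that are recovered.
edge_performance <- function(true_net, est_net) {
  t_edges <- true_net[upper.tri(true_net)] != 0
  e_edges <- est_net[upper.tri(est_net)] != 0
  c(sensitivity = mean(e_edges[t_edges]),
    specificity = mean(!e_edges[!t_edges]))
}

# True network has edges 1-2 and 2-3; estimate recovers 1-2, adds 1-4:
true_net <- matrix(0, 4, 4); true_net[1, 2] <- true_net[2, 3] <- 1
true_net <- true_net + t(true_net)
est_net  <- matrix(0, 4, 4); est_net[1, 2] <- est_net[1, 4] <- 1
est_net  <- est_net + t(est_net)
edge_performance(true_net, est_net)  # sensitivity 0.5, specificity 0.75
```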

We have investigated these claims and replicated the findings using the independent implementations of EBICglasso in both the qgraph and huge packages. This effect seems to occur only in network structures that are very dense (much denser than those typically investigated in simulations and proofs regarding GLASSO) and that feature many very small edges, blurring the line between true and false effects. We had already noted lower specificity than expected in our tutorial paper, when discussing tools one may use to investigate these properties for a given network structure, and hypothesized that this may be an effect of a dense network structure with many small edges. The results of Williams and Rast seem to confirm this speculation by taking the effect to the extreme.

What is the problem and how can it be solved?

The main problem is that in some cases the GLASSO algorithm may retrieve very small false edges, which are not present in the true network under which the data are simulated. There are two reasons why this may occur:

  1. For certain tuning parameters, GLASSO identifies the correct model. However, the EBIC or cross-validation selects the wrong tuning parameter.
  2. GLASSO is incapable of selecting the right model for any tuning parameter even with infinite sample size, and as a result any selection procedure is doomed to fail.

Case 1 is interesting, and may be the reason why regularized EBICglasso estimation has high specificity but does not always converge to the true model. This case may lead to a drop in specificity to about 0.6 – 0.7 and seems to occur in dense networks. We hypothesize that the added penalty biases parameters so much that the EBIC tends to select a slightly denser model to compensate. We are currently working out ways to improve tuning parameter selection, including running more simulations on preferred EBIC hyperparameters in denser networks than usually studied, theoretically derived thresholding rules, and non-regularized model selection procedures.

Case 2 seems to be the case in which Williams and Rast operate, and is harder to tackle. To exemplify this problem, we generated a 20% sparse graph using the codes kindly provided by Williams and Rast, and entered the true implied variance-covariance matrix into glasso. Inverting this true matrix and thresholding at an arbitrary small number returns the correct graph. However, there is no tuning parameter at which the GLASSO algorithm retrieves this graph:

[Figure: glasso estimates across the tuning parameter path for the 20% sparse graph]

The codes for this example are available online. It is striking that at no point GLASSO captures the true model, even though the true variance-covariance matrix (corresponding to \(N = \infty\)) was used as input. This is not common. Using the exact same codes, an 80% dense network parameterized according to Tin & Li (2011) shows a strikingly different picture:

[Figure: glasso estimates across the tuning parameter path for the 80% dense network]

Here, the true model is retrieved for some GLASSO tuning parameters. We are currently investigating whether we can identify cases in which GLASSO is incapable of retrieving the correct model at infinite sample size, and hypothesize this may be due to (a) very low partial correlations used in the generating structure, leading to a blurred line between true and false edges, and (b) relatively large partial correlations or implied marginal correlations, leading to inversion problems. We hope anyone may have feedback on why this set of networks fails in GLASSO estimation, and on how such troublesome cases may be discovered from empirical data.
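The "invert and threshold" check described above is easy to reproduce in base R for any positive-definite precision matrix (a toy structure here, not the Williams and Rast graph):

```r
# Build a precision matrix with a known zero pattern, compute the implied
# population covariance matrix, invert it back, and threshold at an
# arbitrary small number: the true pattern returns exactly.
K <- diag(4)
K[1, 2] <- K[2, 1] <- 0.3
K[3, 4] <- K[4, 3] <- -0.2

Sigma  <- solve(K)          # true covariance matrix (N = infinity)
K_back <- solve(Sigma)      # precision matrix recovered from Sigma
adj <- abs(K_back) > 1e-8   # threshold at an arbitrary small number
diag(adj) <- FALSE
sum(adj) / 2                # 2 edges: 1-2 and 3-4
```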

Why can sparsity impact GLASSO performance?

Many network estimation techniques, including the GLASSO, rely on the assumption of sparsity, also termed the bet on sparsity. That is, these methods work very well assuming the true model is sparse (i.e., contains relatively few edges). However, when the true model is not sparse, the LASSO may lead to unexpected results. Most simulation studies and mathematical proofs have studied network retrieval in sparse settings. As more psychological network studies are published, however, one consistent finding seems to be that psychological networks are less sparse than expected: at high sample sizes, most studies identify many small edges near zero. This finding led to a change in the qgraph default for the lower bound of the tuning parameter path (lambda.min.ratio), since the old value of 0.1 leads to networks that are estimated too sparse. The default is now 0.01, which leads to better sensitivity but also a small decrease in specificity. Williams and Rast set this value to 0.001 in some cases, which should again mark an increase in sensitivity but perhaps also a decrease in specificity.
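To see what lambda.min.ratio does, here is a sketch of how such a log-spaced tuning parameter path is typically constructed (qgraph and glasso may differ in details such as the choice of lambda.max):

```r
# Path of nLambda tuning parameters, log-spaced from lambda.max (at which
# the estimated network is empty) down to lambda.min.ratio * lambda.max.
# A larger ratio (0.1) stops at stronger penalties, so only sparser
# networks are considered; 0.01 extends the path toward denser models.
lambda_path <- function(lambda.max, lambda.min.ratio = 0.01, nLambda = 100) {
  exp(seq(log(lambda.max), log(lambda.min.ratio * lambda.max),
          length.out = nLambda))
}

path <- lambda_path(lambda.max = 0.5)
range(path)  # 0.005 to 0.5
```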

Are graphical LASSO networks unstable?

Fortunately, even in the extreme cases discussed by Williams and Rast, low specificity arises exclusively because very weak edges may be incorrectly included in the graph; in a qgraph plot, for instance, these would be nearly invisible. As a result, the general structure and substantive interpretation of the network are not affected. GLASSO results, like those of other statistical methods, are not intrinsically stable or unstable, and follow-up stability and accuracy checks should always be performed after fitting a network. If the estimated network is stable and accurate, the work of Williams and Rast gives no reason to doubt its interpretation, except when that interpretation relied on a thorough investigation of the weakest edges in the network (possibly made visible by removing the fading of edges when displaying them). Of note, the bootstrapping methods also allow one to check whether an edge differs significantly from zero. We argue against this procedure in general, as it amounts to double thresholding a network and causes a severe loss of sensitivity. If the goal is specifically to retrieve small edges, however, it may be warranted to check whether zero is contained in the confidence interval of the regularized edge parameter.
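Such a follow-up check can be sketched with bootnet as follows (`myData` is a placeholder name for a data frame of item responses):

```r
library("bootnet")

# Estimate the regularized network and bootstrap the edge weights:
net   <- estimateNetwork(myData, default = "EBICglasso")
boots <- bootnet(net, nBoots = 1000)

# Plot bootstrapped edge-weight intervals, ordered by sample values;
# edges whose interval covers zero warrant cautious interpretation:
plot(boots, order = "sample")
summary(boots)
```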

Does this affect the replicability of networks?

Replicability and model selection are related but do not refer to the same methodological criterion. The recent findings regarding the GLASSO algorithm concern model quality: when simulating data from a ‘true model’, we can check whether results from an estimation method converge to this true model with increasing sample size (Borsboom et al., in press). Under certain circumstances, the GLASSO algorithm retrieves very small edges that are not present in the true model under which the data were simulated (i.e., the edges are false positives). Replicability, by contrast, concerns finding the same results in different empirical samples. Naturally, if the GLASSO algorithm indeed retrieves small false-positive edges, this will affect the replicability of these small edges across empirical samples. But since these edges should not be there in the first place, the origin of the issue is model selection, not replicability. While we investigate these results further, an important take-home message of the work of Williams and Rast is that researchers who aim to use network models to identify very small effects (when it is plausible that the ‘true model’ is dense, with many small edges) may wish to consider other estimation methods, such as the node-wise estimation or non-regularized model selection discussed below.

Are all networks estimated using the graphical LASSO?

No. GLASSO is only one of many options available to estimate undirected network models from data. GLASSO is often used because it (a) is fast, (b) has high power in detecting edges, (c) selects the entire model in one step, and (d) only requires a covariance matrix as input, which allows networks to be computed from polychoric correlations (for ordinal data) and facilitates reproducibility of results. Beyond the GLASSO, another option for regularized network estimation is to perform (logistic) regressions of every variable on all other variables and to combine the resulting regression parameters (proportional to edge weights) into a network. This is called nodewise estimation, and it is at the core of several often-used estimation methods such as the adaptive LASSO, IsingFit, and mixed graphical models (MGM). While nodewise estimation has less power than the GLASSO, there are cases in which the GLASSO fails but nodewise regressions do not (Ravikumar et al., 2008). Our preliminary simulations showed that nodewise estimation methods retain high specificity in the particular case identified by Williams and Rast in which the specificity of the GLASSO drops. In addition to LASSO estimation methods that aim to retrieve sparse models, other regularization techniques exist that instead aim to estimate low-rank dense networks. Still another option is elastic-net estimation, which compromises between estimating sparse and dense networks.
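A sketch of how such nodewise estimators can be run through the same unified interface in bootnet (`binData` and `mixedData` are hypothetical data frames of binary and mixed-type responses, respectively):

```r
library("bootnet")

# IsingFit: nodewise (logistic) regressions for binary data
isingNet <- estimateNetwork(binData, default = "IsingFit")

# mgm: mixed graphical models for combinations of variable types
mgmNet <- estimateNetwork(mixedData, default = "mgm")
```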

Of note, and as Williams and Rast describe clearly, networks can also be estimated using non-regularized methods. Two main ways to do this are (a) estimating a saturated network and subsequently removing edges that are below some threshold (e.g., based on significance tests, as argued by Williams and Rast), and (b) selecting between several estimated models in which edge parameters are estimated without regularization. Both have already been discussed in the blog post that also introduced GLASSO estimation in the qgraph package. Using the ‘threshold’ argument in qgraph and bootnet, one can threshold edges based on significance (possibly after correcting for multiple tests) or false discovery rates. A downside of thresholding, however, is that no model selection is performed: the retained edges are estimated without taking into account the other edges that are set to zero. A (rather slow) step-wise model selection procedure has been implemented in qgraph in the function ‘findGraph’. We are currently working on a faster non-regularized model selection procedure, in which the GLASSO algorithm is first used to generate a limited set of models (e.g., 100), which are subsequently refitted without regularization. Preliminary simulations show that this procedure is more conservative than regularized GLASSO and often converges to the true model at high sample sizes.

The bootnet package contains many default sets to facilitate computing networks using different estimation procedures in a unified framework. The table below provides an overview of methods currently implemented as default sets and when they can be used. If you would like us to implement more default sets, please let us know!

Network table

References

Borsboom, D., Robinaugh, D. J., The Psychosystems Group, Rhemtulla, M., & Cramer, A. O. J. (in press). Robustness and replicability of psychopathology networks. World Psychiatry.

Foygel, R., & Drton, M. (2010). Extended Bayesian information criteria for Gaussian graphical models. Advances in Neural Information Processing Systems, 604–612.

Ravikumar, P., Raskutti, G., Wainwright, M. J., & Yu, B. (2008). High-dimensional covariance estimation by minimizing l1-penalized log-determinant divergence. Electronic Journal of Statistics, 5, 935–980.

Yin, J., & Li, H. (2011). A sparse conditional Gaussian graphical model for analysis of genetical genomics data. The Annals of Applied Statistics, 5(4), 2630–2650.

 

Network Model Selection Using qgraph 1.3

In this post I will describe some of the new functionality in version 1.3 of qgraph. The qgraph package is aimed at visualizing correlational structures and, more recently, the results of network analysis applied to, in particular but not limited to, psychology. The application of network analysis to psychology is the core interest of the Psychosystems research group, in which the qgraph package has been developed. Versions 1.2.4 and 1.2.5 introduced new functions to compute graph descriptives of weighted graphs, and version 1.3 extends the visualization techniques with model selection tools for estimating the graph structure. As such, qgraph is no longer just a visualization tool but a full suite for network analysis: from data, to estimating a graph structure, to assessing graph descriptives. Version 1.3 is on CRAN now and will be updated on mirrors shortly. To install it and check that you have version 1.3, run:

install.packages("qgraph")
packageDescription("qgraph")$Version

Alternatively, you can install the developmental version from GitHub:

library("devtools")
install_github("sachaepskamp/qgraph")

Network models

In qgraph, generally two types of network models can be estimated for multivariate Gaussian data: the correlation network and the partial correlation network. In the graph literature, a correlation network is typically drawn with bidirectional edges, whereas a partial correlation network is typically drawn with undirected edges:

 

In qgraph, bidirectional edges can be plotted but are omitted by default to make the graph clearer. When fully connected, both the correlation network and the partial correlation network are saturated models; degrees of freedom are obtained by removing edges. Thus, a network model simply means a correlation or partial correlation network in which some edges are set to zero, and model selection means attempting to identify which edges should be set to zero. This can be done in broadly three ways:

  • Thresholding: simply remove all edges that are weaker than some threshold, such as all edges that are not significantly different from zero.
  • Model search: estimate several models in which some edges are set to zero, and use stepwise comparisons or an information criterion to decide which model is optimal.
  • LASSO estimation: jointly estimate parameter values and the absence of edges using regularization techniques (more on this below).

Correlation networks

A correlation network is a model for marginal independence relationships; two unconnected nodes are marginally independent. For instance, the graph above implies the following independence relationship:
\[
A \!\perp\!\!\!\perp C.
\]
We can use the schur complement to see the relationship between \(A\) and \(C\) after conditioning on \(B\):
\[
\mathrm{Cov}\left( A, C \mid B \right) = \mathrm{Cov}\left( A, C \right) - \mathrm{Cov}\left( A, B \right) \mathrm{Var}\left( B \right)^{-1} \mathrm{Cov}\left(B, C \right) \not= 0,
\]
since both covariances with \(B\) are nonzero and the inverse of a variance is always nonzero for any (non-degenerate) random variable. Thus, the correlation network above does not only imply a marginal independence; it also implies a conditional dependency relationship:
\[
A \not\!\perp\!\!\!\perp C \mid B.
\]
Such an induced dependency can only occur in very specific causal structures that might not correspond to our general notion of nodes interacting with each other. In general, we expect that if \(A\) and \(B\) are correlated and \(B\) and \(C\) are correlated, then \(A\) and \(C\) are also correlated. We would therefore expect correlation networks to often be saturated, especially when constructing networks based on psychometric tests that are designed to contain correlated items. As such, we generally do not apply model selection to correlation networks.
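The derivation above can be verified numerically with a small toy example in R, using a made-up correlation matrix in which \(A\) and \(C\) are marginally uncorrelated:

```r
# Correlation network with edges A--B and B--C but no edge A--C:
S <- matrix(c(1.0, 0.5, 0.0,
              0.5, 1.0, 0.5,
              0.0, 0.5, 1.0), 3, 3,
            dimnames = list(c("A", "B", "C"), c("A", "B", "C")))

# Conditional covariance of A and C given B via the Schur complement:
S["A", "C"] - S["A", "B"] * S["B", "C"] / S["B", "B"]
# -0.25: nonzero, so A and C are conditionally dependent given B
```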

Even though correlation networks are not that interesting from a modeling perspective they are very interesting in the information they portray about the structure of a dataset. Epskamp et al. (2012) described how qgraph and a correlation network of the Big 5 can be used from a psychometric point of view to simply visualize the correlational patterns in the dataset; factors should emerge as clusters of nodes. Cramer et al. (2012) interpreted the same Big 5 network in the context of the network perspective of psychology. Furthermore, correlation networks have also been described and used by Cramer, Borsboom, Aggen and Kendler (2011), Schmittmann et al. (2013) and Ruzzano, Borsboom and Geurts (2014).

Partial correlation network

The partial correlation network is a model for conditional independence relationships. In the above undirected network there is no edge between nodes \(A\) and \(C\), which implies that:
\[
A \!\perp\!\!\!\perp C \mid B.
\]
Thus, the partial covariance between \(A\) and \(C\) after conditioning on all other nodes in the network equals zero. Now, by definition, \(\mathrm{Cov}\left( A, C \mid B \right)\) is equal to zero and thus:
\[
\mathrm{Cov}\left( A, C \right) = \mathrm{Cov}\left( A, B \right) \mathrm{Var}\left( B \right)^{-1} \mathrm{Cov}\left(B, C \right) \not= 0,
\]
which implies the following marginal relationship:
\[
A \not\!\perp\!\!\!\perp C.
\]
Therefore, the partial correlation network implies the opposite kind of independence relationships between unconnected nodes: unconnected nodes are conditionally independent but may still correlate marginally. The edges in the network can be understood as pairwise interactions between two nodes. This allows us to model interaction effects between pairs of nodes while still allowing all nodes to correlate. Such a network of pairwise interactions is termed a pairwise Markov random field (Lauritzen, 1996). If the data are assumed multivariate normally distributed, the model is called a Gaussian graphical model, Gaussian random field, concentration network or partial correlation network; the network structure is then directly related to the inverse of the covariance matrix, with zeroes indicating missing edges.
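The mirror-image implication can be checked with an equally small toy example: a precision (inverse covariance) matrix with a structural zero for the missing edge between \(A\) and \(C\) still yields a nonzero marginal covariance:

```r
# Precision matrix of a partial correlation network A -- B -- C;
# the zero entries encode the missing edge between A and C:
K <- matrix(c( 1.0, -0.3,  0.0,
              -0.3,  1.0, -0.3,
               0.0, -0.3,  1.0), 3, 3)

Sigma <- solve(K)   # implied covariance matrix
Sigma[1, 3]         # nonzero: A and C are marginally dependent
```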

We will use the term partial correlation network to describe this class of networks. In a partial correlation network, the strength of an edge (the partial correlation between two nodes after conditioning on all other nodes) is directly equivalent to the predictive quality between two nodes that would be obtained in multiple regression. Thus, the strength of an edge can be seen as how well two nodes predict each other: in the above network, knowing \(A\) tells you something about \(B\), and in turn knowing \(B\) tells you something about \(C\). The network can also be seen as a large-scale mediation analysis, as in the above network the predictive quality of \(A\) on \(C\) is mediated by \(B\).

Network estimation using qgraph

We will analyze the BFI dataset from the psych package, of which the correlation network has previously been described on Joel Cadwell’s blog. This dataset contains 25 items on the five central personality traits: openness to experience, conscientiousness, extroversion, agreeableness and neuroticism. Each trait has five indicators, such as “make friends easily” for extroversion and “make people feel at ease” for agreeableness. We can load the data into R as follows:

library("psych")
data(bfi)

We are only interested in the interactions between items of the big five, so we subset the 25 indicators:

bfiSub <- bfi[,1:25]

These responses are measured on a five-point scale, so the assumption of normality is violated by definition. As a solution, we compute polychoric rather than Pearson correlations. To do this, we can use the cor_auto() function in qgraph, which automatically detects whether data look ordinal and runs the lavCor() function from the lavaan package to compute Pearson, polychoric or polyserial correlations where needed:

library("qgraph")
bfiCors <- cor_auto(bfiSub)
## Variables detected as ordinal: A1; A2; A3; A4; A5; C1; C2; C3; C4; C5; E1; E2; E3; E4; E5; N1; N2; N3; N4; N5; O1; O2; O3; O4; O5

To help plot our graph we can also load more information on the meaning of each node:

Names <- scan("http://sachaepskamp.com/files/BFIitems.txt", what = "character", sep = "\n")
Groups <- rep(c('A','C','E','N','O'),each=5)

Now we can plot the (saturated) correlation network. To do so, we can simply send the correlation matrix to the qgraph() function and use some more arguments to make the plot look good. In addition, we can set graph = "cor" to tell qgraph that the input is, in fact, a correlation matrix (this doesn’t do much other than check if the matrix is positive definite):

corGraph <- qgraph(bfiCors, layout = "spring", graph = "cor",
       nodeNames = Names, groups = Groups, legend.cex = 0.3,
      cut = 0.3, maximum = 1, minimum = 0, esize = 20,
      vsize = 5, repulsion = 0.8)

In this network each node represents one of the 25 items in the BFI dataset and edges represent the strength of correlation between them: green edges represent positive correlations, red edges negative correlations, and the wider and more saturated an edge, the stronger the correlation. The argument layout = "spring" makes qgraph use a weighted version of the Fruchterman and Reingold (1991) algorithm to place strongly correlated nodes together. The nodeNames, groups and legend.cex arguments control the coloring and the legend placed next to the graph, the vsize argument the size of the nodes, and the repulsion argument the repulsion ratio used in the Fruchterman-Reingold algorithm.

The minimum argument causes all edges with a value below the minimum to not be drawn (these edges are only made invisible but still taken into account when computing graph descriptives), the maximum argument sets an upper bound for the width of edges to scale to and the cut argument splits scaling in saturation and width; edges above the cut value are fully saturated and scale in width whereas edges below the cut value are small and scale in color. By default qgraph will choose these values to optimally plot a graph (see the manual for more details); to know which values qgraph used you could use the argument details = TRUE.

The above correlation network nicely retrieves the overall five-factor structure; as expected, factors emerge as clusters of correlations. To compute the saturated partial correlation network we can set the argument graph = "pcor":

pcorGraph <- qgraph(bfiCors, layout = corGraph$layout, graph = "pcor",
      nodeNames = Names, groups = Groups, legend.cex = 0.3, 
      cut = 0.1, maximum = 1, minimum = 0, esize = 20,
      vsize = 5)

Now we set the layout argument equal to the layout of the correlation network, and we set the cut value lower because partial correlations are generally weaker than correlations. We can see that the partial correlation network generally returns the same pattern of interactions, but also that some edges, such as the one between N4 and N1, are much weaker in the partial correlation network than in the correlation network.

Thresholding

Both the networks shown above are fully saturated; even though some (partial) correlations are so weak they are practically invisible both networks are fully connected. The aim of model selection is to identify which connections are zero, as these values indicate (conditional) independence relationships. A straightforward way of doing this is by thresholding; setting all edges that are not significantly different from zero to be equal to zero.

To use thresholding in qgraph we can use the threshold argument, which differs from minimum in that it actually removes edges from the network, thereby influencing the network descriptives. The threshold argument can be set either to a numeric value or to a string indicating the type of adjustment for multiple tests, corresponding to the adjust argument of the corr.test function in the psych package. In addition to the options psych provides, the threshold argument can be set to "locfdr" to use the local FDR measure provided by the fdrtool package and used in the FDRnetwork function in qgraph. For instance, we can remove all non-significant edges from the partial correlation network using the Holm correction as follows:

holmGraph <- qgraph(bfiCors, layout = corGraph$layout, graph = "pcor",
      threshold = "holm", sampleSize = nrow(bfi),
      nodeNames = Names, groups = Groups, legend.cex = 0.3, 
      cut = 0.1, maximum = 1, minimum = 0, esize = 20,
      vsize = 5)

This makes the network much cleaner and more interpretable.

Model search

While thresholding can be insightful, it is a rather crude way of model selection: we only remove the non-significant edges, without looking at how the other edge values change or at how well the model without certain edges explains the data. To accomplish this we need model selection. We are particularly interested in the question: which model explains the data appropriately while remaining simple (having few edges)?

One way to accomplish this is by evaluating all possible models and comparing them on some information criterion, such as the BIC, which weights the likelihood against the sparsity. For a three-node network, all possible models are the following:

 

The connections between models indicate nesting relationships; a model with only an edge between \(A\) and \(B\) is nested in the model with edges between both \(A\) and \(B\) and \(B\) and \(C\). The number of possible models is \(2^{p(p-1)/2}\), with \(p\) being the number of nodes in the network. For a three-node network it is entirely tractable to estimate all \(8\) models and compare their BIC values, but for \(25\) nodes it becomes practically impossible to compare all possible models. Thus, a search is performed to navigate the model space. We could start at the bottom with the independence model and move stepwise to more complicated models by comparing only directly nested models, or start at the top with the saturated model and move stepwise down to simpler models. The first routine is called step-up estimation and the second step-down estimation.
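To get a feel for why a brute-force search is intractable, the size of the model space can be computed directly:

```r
# Number of possible (partial) correlation network structures for p nodes:
nModels <- function(p) 2^(p * (p - 1) / 2)

nModels(3)    # 8: all models can be estimated and compared
nModels(25)   # about 2e90: exhaustive comparison is hopeless
```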

The findGraph function in qgraph can be used to search for an optimal (partial) correlation network. This function can perform a brute-force search over all models, step-up comparison, or step-down comparison. To compare models, the extended Bayesian information criterion (EBIC; Foygel & Drton, 2010) is used. By default, findGraph starts at the thresholded graph obtained with the Holm correction and iterates between step-up and step-down search until an optimal model is found:

optGraph <- findGraph(bfiCors, nrow(bfi), type = "pcor")
optimalGraph <- qgraph(optGraph, layout = corGraph$layout,
      nodeNames = Names, groups = Groups, legend.cex = 0.3, 
      cut = 0.1, maximum = 1, minimum = 0, esize = 20,
      vsize = 5)

 

LASSO estimation

Model selection can be slow as the number of variables increases, and the stepwise process may get lost in the model space and fail to converge to the best model; for example, step-up and step-down searches often arrive at different models. A well-established method for simultaneously estimating parameters and performing model selection on a partial correlation network is the least absolute shrinkage and selection operator (LASSO; Tibshirani, 1996) regularization technique. A particularly fast variant of the LASSO for these kinds of networks is the graphical LASSO, which directly estimates the inverse of the covariance matrix (Friedman, Hastie and Tibshirani, 2008) and has been implemented in the R package glasso.

In short, the graphical LASSO does not estimate the maximum likelihood solution but a penalized one, in which the likelihood is penalized for the sum of absolute parameter estimates. Because of this, the graphical LASSO estimates a model that still explains the data well, with generally weaker parameter estimates and many parameter estimates of exactly zero. The amount of penalization is specified by a tuning parameter \(\lambda\). In qgraph, the EBICglasso function can be used to estimate networks under 100 different \(\lambda\) values and return the one that minimizes the extended Bayesian information criterion (EBIC; Chen & Chen, 2008). This procedure has been shown to accurately recover the network structure (Foygel & Drton, 2010; van Borkulo et al., 2014).
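For reference, the EBIC used here takes the following form (Foygel & Drton, 2010), where \(\ell\) denotes the maximized log-likelihood, \(E\) the number of nonzero edges, \(n\) the sample size and \(p\) the number of nodes:
\[
\mathrm{EBIC} = -2\ell + E \log(n) + 4 E \gamma \log(p),
\]
in which \(\gamma\) is a hyperparameter governing the extra penalty on dense models; qgraph uses \(\gamma = 0.5\) by default, and setting \(\gamma = 0\) reduces the EBIC to the ordinary BIC.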

To automatically have qgraph run EBICglasso and estimate the model we need only set graph = "glasso":

glassoGraph <- qgraph(bfiCors, layout = corGraph$layout, 
      graph = "glasso", sampleSize = nrow(bfi),
      nodeNames = Names, groups = Groups, legend.cex = 0.3, 
      cut = 0.1, maximum = 1, minimum = 0, esize = 20,
      vsize = 5)

This method runs extremely fast and returns a sparse and interpretable network structure. The network shows some interesting properties. For example, neuroticism consists of two distinct clusters, an anxiety cluster and a depression cluster, connected by the item “N3: Have frequent mood swings”. The conscientiousness item “C5: Waste my time” and the extroversion item “E3: Know how to captivate people” act as bridging nodes between the different personality trait clusters. Finally, most strong connections make perfect sense when interpreted as pairwise interactions.

Graph descriptives using qgraph

Now that we have estimated the partial correlation network in different ways we can look at how these networks compare in terms of graph descriptives. We are particularly interested in node centrality and clustering. To compare the estimated networks, we can use the centralityPlot and clusteringPlot functions which use ggplot2 to visualize the estimates:

centralityPlot(
  list(
    saturated = pcorGraph, 
    threshold = holmGraph,
    modelSearch = optimalGraph,
    glasso = glassoGraph)
  )

We can similarly compare the clustering coefficients:

clusteringPlot(
  list(
    saturated = pcorGraph, 
    threshold = holmGraph,
    modelSearch = optimalGraph,
    glasso = glassoGraph)
  )
## Warning: Removed 25 rows containing missing values (geom_point).

We can see that even though the graphs are all highly similar, they can differ in centrality and clustering coefficients. Generally, however, all graphs are somewhat aligned: nodes that are highly central are so across all graphs.

References