Free summer school materials and videos

Last month, we hosted the first online version of our summer school on network analysis. All materials except solutions are now available on the OSF, and – for a limited time (until October 31) – you can get access to 50+ video lectures for free via Eventbrite.

Special thanks to everyone who helped make this possible: Sacha Epskamp, Adela Isvoranu, Ria Hoekstra, Denny Borsboom, Riet van Bork, Julian Burger, Jonas Haslbeck, Lourens Waldorp, Eiko Fried, Alessandra Mansueto, Karoline Huth, Jill de Ron, Adam Finnemann, Gaby Lunansky, Jolanda Van der Ree-Kossakowski, and Pia Tio.

The polarization within and across individuals: the hierarchical Ising opinion model

On May 7, 2020, we published the paper ‘The polarization within and across individuals: the hierarchical Ising opinion model’ in the Journal of Complex Networks.

Polarization of opinions involves psychological processes as well as group dynamics. However, the interaction between within-individual dynamics of attitude formation and across-person polarization is rarely studied. By modelling individual attitudes as Ising networks of attitude elements, and approximating this behaviour by the cusp singularity, we developed a fundamentally new model of social dynamics.

In this hierarchical model, agents behave either discretely or continuously depending on their attention to the issue. At the individual level the model reproduces the mere thought effect and resistance to persuasion. At the social level the model implies polarization and the persuasion paradox. We propose a new intervention for escaping polarization in bounded confidence models of opinion dynamics.
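The cusp approximation can be illustrated with a generic sketch (this is not the paper's code; treating `attention` as the splitting factor and `information` as the normal factor is an assumed simplification):

```r
# Generic cusp sketch (illustration only, not the paper's model code):
# equilibrium opinions x solve x^3 - attention * x - information = 0.
cusp_equilibria <- function(attention, information) {
  # polyroot() takes coefficients from the constant term upward
  roots <- polyroot(c(-information, -attention, 0, 1))
  # keep the (numerically) real roots: the possible opinion states
  sort(Re(roots[abs(Im(roots)) < 1e-8]))
}

# Low attention: a single, continuously varying opinion state
cusp_equilibria(attention = -1, information = 0)

# High attention: two stable polarized states around an unstable middle
cusp_equilibria(attention = 2, information = 0)
```

With low attention there is a single equilibrium, so opinions change smoothly; with high attention the middle state is unstable and opinions jump discretely between the two outer states, which is the within-person mechanism the model scales up to across-person polarization.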

Han van der Maas, Jonas Dalege, and Lourens Waldorp

Psychological Methods, UvA

Job opportunities in Amsterdam!

This post was written by Sacha Epskamp, and cross-posted to the psychonetrics blog.

The old city center of Amsterdam – a location right next to the Institute for Advanced Studies which will partly host the PhD positions.

There are several exciting job vacancies in Amsterdam on topics very closely related to our research: one assistant professor position at the Psychological Methods Group, and several PhD positions hosted through the newly founded Center for Urban Mental Health. Note that PhD positions in the Netherlands are full-time 4-year paid jobs including the same benefits as any other job (e.g., pension buildup). Starting a PhD position in the Netherlands requires a Master’s degree. In this blog post, I will highlight some of the most relevant positions. Please consider applying if you are eligible, or forwarding these vacancies to eligible candidates you may know!

Assistant Professor of Psychological Methods

The first position I would like to point out is an assistant professor position at the Psychological Methods Group. The Psychological Methods group of the University of Amsterdam is one of the largest and most successful research and education centers in the field of psychological methods. In the past decade, several strong lines of research originated from this group, such as network modeling of psychological phenomena in the Psychosystems and the Psychonetrics lab groups, Bayesian statistical tools implemented in the open-source statistics program JASP, and adaptive testing implemented in the spin-off company Oefenweb. We have recently launched the increasingly popular Behavioral Data Science master program, which will be the focus of this assistant professorship as well.

Please note that this position does not come with a tenure-track (these do not exist at the Psychological Methods Group), but does come with an outlook on a permanent position!

PhD positions at the Center for Urban Mental Health

The University of Amsterdam has recently approved the foundation of the first-ever Center for Urban Mental Health, which will take an interdisciplinary approach to tackling common mental health problems from a complexity point of view. This center will at first be housed at the renowned Institute for Advanced Studies, right in the city center of Amsterdam. The Center for Urban Mental Health will be launched with several interdisciplinary PhD projects, all aiming to start early 2020. Below, I will list the most relevant positions that are currently open for applications!

Computational modeling of psychological and social dynamics in urban mental health conditions: the case of addictive substance use.

The first PhD vacancy is a PhD project between the Psychological Methods Group in the Department of Psychology, the Department of Computational Science in the Informatics Institute, and the Institute for Advanced Studies. This project will be supervised by me (Sacha Epskamp) and Michael Lees, and will be hosted at the Psychological Methods group and the Institute for Advanced Studies. The aim of this project is to form computational models (e.g., differential equations, agent-based models, Ising models) that combine psychological dynamics with social dynamics. This means that we wish to form models that can simulate the intraindividual dynamics of multiple people who also interact with each other in complex ways. We are specifically looking for a candidate with a background in using such models as well as a strong affinity with psychological research.

Please note that the closing date is already this Sunday (November 24). However, if you are interested in doing a PhD on the topic of computational modeling of psychological dynamics, feel free to contact me (Sacha Epskamp) as more positions related to this topic may become available.

Network Analysis and Urban Mental Health Interventions

The second PhD vacancy is a joint project between the Department of Psychology and the Department of Communication Science. The project will be supervised by me (Sacha Epskamp), alongside Reinout Wiers, Julia van Weert, and Barbara Schouten. This project will start out as a methodological project, investigating ways of estimating network models from self-report time-series data (e.g., experience sampling method; ESM), after which the project will move towards a more applied clinical focus in which the candidate is expected to gather and analyze ESM data, followed by collaborating with clinicians to derive and evaluate intervention plans. We are specifically looking for a candidate who has both a strong focus on clinical research and good data-analytic skills, as well as an affinity or background in methodology.

Network theory of addiction and depression

The third PhD vacancy is a vacancy between the Psychological Methods Group in the Department of Psychology, the Department of Psychiatry at the Amsterdam UMC, location AMC, and the Institute for Advanced Studies. This project will be supervised by Maarten Marsman, Judy Luigjes, and Ruth van Holst. This project will build upon the view that depression and addictions form a complex system of mutually interacting problems and aims to formalize a network model of the two disorders in an urban context. The project will involve analyzing existing cross-sectional, longitudinal and clinical data, taken from an urban population. Where necessary, new techniques and models will be developed for the analysis of these data. The candidate should have a strong affinity to (clinical) psychological research and formal statistical modelling (e.g., Bayesian hierarchical modelling, or mathematical statistics), and proficiency in programming in at least R.

Even more positions!

If this was not enough, more positions at the Center for Urban Mental Health will open up shortly! Currently there is a listing on the complexity of fear, depression and addiction among adolescents (in Dutch) and a listing on aging and mental health, and a few more positions are expected to open up. If you are interested in these, I encourage you to check the Urban Mental Health website and/or follow the Center for Urban Mental Health on Twitter. In addition, even more job opportunities may open up in the coming months, depending on various sources of funding. While I cannot go into detail on those, I highly recommend joining our very active Facebook group on Psychological Dynamics, in which job opportunities on topics related to our research are routinely posted.

Network school & SEM materials + extended registration CCS session

This blog post was written by Sacha Epskamp

I would like to share the following three updates:

1. In January we hosted the annual Psychological Networks Amsterdam Winter School. By popular request, we have now made many of the materials publicly available on the Open Science Framework:

2. I teach two courses on Structural Equation Modeling at the University of Amsterdam, which are closely related and often touch on network modeling as well. Many materials (including video summaries) of the latest course are now available at

3. We have extended the deadline for our event in Singapore on complexities of adverse behavior and mental health to June 16! Please share this with anyone you think might be interested!

Complexities of Adverse Behavior – Registration open!

This blog post was written by Sacha Epskamp

I am happy to announce the full-day “Complexities of Adverse Behavior” event in Singapore on October 2! We have now opened registration for contributed talks, with a deadline of June 1. For more information and registration, please see our website.

This will be part of the wider Conference on Complex Systems, which is the largest annual international conference on complexity science, bringing together many researchers from different fields of study. I attend this conference every year, as I think the larger field of human behavior does not stop at the borders of psychological research (plus, it is a really nice conference!).

A preview of new features in bootnet 1.1

This blog post was written by Sacha Epskamp.

In the past year, we have published two tutorial papers on the bootnet package for R, which aims to provide an encompassing framework for estimating network structures and checking their accuracy and stability. The first paper, published in Behavior Research Methods, focuses on how to use the bootstrapping methods to assess the accuracy and stability of network structures. The second, published in Psychological Methods, focuses on the most commonly used network model, the Gaussian graphical model (GGM; a network of partial correlations), and discusses further utility of the bootnet package.

With more than a year of development, version 1.1 of the bootnet package marks the most ambitious series of updates to the package to date. The version is now ready for testing on Github, and will soon be released on CRAN. This version includes or fleshes out a total of 7 new default sets, including one default set aimed at time-series analysis, and offers new functionality to the default sets already included in the package. In addition, the package now fully supports directed networks as well as methods resulting in multiple network models, and supports the expected influence centrality measure. Furthermore, the plotting functions in bootnet have been updated and now show more information. Finally, the package includes a new simulation function, replicationSimulator, and greatly improves the functionality of the netSimulator function. With these extensions, bootnet now aims to provide an exhaustive simulation suite for network models in many conditions. This blog post is intended to introduce this new functionality and to encourage testing of the functionality before the package is submitted to CRAN.

The package can be installed from GitHub, which requires developer tools to be installed: Rtools for Windows and Xcode for Mac. Subsequently, the package can be loaded as usual.
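A minimal sketch of the installation and loading commands (the GitHub repository path "SachaEpskamp/bootnet" is an assumption):

```r
# Install the development version from GitHub
# (repository path assumed; requires the devtools package):
devtools::install_github("SachaEpskamp/bootnet")

# Load the package as usual:
library("bootnet")
```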
Updates to network estimation

Bootnet now contains the following default sets:

| default | model | Description | Packages | Recently added |
|---|---|---|---|---|
| "EBICglasso" | GGM | Regularized GGM estimation using glasso and EBIC model selection | qgraph, glasso | |
| "pcor" | GGM | Unregularized GGM estimation, possibly thresholded using significance testing or FDR | qgraph | |
| "IsingFit" | Ising | Regularized Ising models using nodewise logistic regressions and EBIC model selection | IsingFit, glmnet | |
| "IsingSampler" | Ising | Unregularized Ising model estimation | IsingSampler | |
| "huge" | GGM | Regularized GGM estimation using glasso and EBIC model selection | huge | |
| "adalasso" | GGM | Regularized GGM estimation using adaptive LASSO and cross-validation | parcor | |
| "mgm" | Mixed graphical model | Regularized mixed graphical model estimation using EBIC or cross-validation | mgm | Revamped |
| "relimp" | Relative importance networks | Relative importance networks, possibly using another default for the network structure | relaimpo | Yes |
| "cor" | Correlation networks | Correlation networks, possibly thresholded using significance testing or FDR | qgraph | Yes |
| "TMFG" | Correlation networks & GGM | Triangulated Maximally Filtered Graph | NetworkToolbox | Yes |
| "LoGo" | GGM | Local/Global Sparse Inverse Covariance Matrix | NetworkToolbox | Yes |
| "ggmModSelect" | GGM | Unregularized (E)BIC model selection using glasso and stepwise estimation | qgraph, glasso | Yes |
| "graphicalVAR" | Graphical VAR | Regularized estimation of temporal and contemporaneous networks | graphicalVAR | Yes |

As can be seen, several of these are newly added. For example, we can now estimate the two more conservative GGM estimation methods recently added to qgraph, which I described in an earlier blog post. Taking my favorite dataset as an example:
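A minimal sketch of loading this dataset, assuming the bfi data from the psych package (newer versions of psych ship it in psychTools):

```r
# The bfi dataset: 25 personality items plus demographics.
# Assumed here to come from the psych package (newer versions: psychTools).
library("psych")
data(bfi)

# The examples below use the 25 item columns:
head(bfi[, 1:25])
```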


The optimal unregularized network that minimizes BIC (note that tuning defaults to 0 in this case) can be obtained as follows:

net_modSelect <- estimateNetwork(bfi[,1:25], 
              default = "ggmModSelect",
              stepwise = FALSE,
              corMethod = "cor")

It is generally recommended to use the stepwise improvement of edges with stepwise = TRUE, but I set it to FALSE here to make this tutorial easier to follow. Likewise, the thresholded regularized GGM can be obtained using the new threshold argument for the EBICglasso default set:

net_thresh <- estimateNetwork(bfi[,1:25],
              tuning = 0, # EBICglasso sets tuning to 0.5 by default
              default = "EBICglasso",
              threshold = TRUE,
              corMethod = "cor")

As usual, we can plot the networks:

Layout <- qgraph::averageLayout(net_modSelect, net_thresh)
plot(net_modSelect, layout = Layout, title = "ggmModSelect")
plot(net_thresh, layout = Layout, title = "Thresholded EBICglasso")


Principal direction and expected influence

One robust finding in psychological literature is that all variables tend to correlate positively after arbitrary rescaling of variables – the positive manifold. Likewise, it is common to expect parameters focusing on conditional associations (e.g., partial correlations) to also be positive after rescaling variables. Negative edges in such a network could indicate (a) violations of the latent variable model and (b) the presence of common cause effects in the data. To this end, bootnet now includes the ‘principalDirection’ argument in many default sets, which takes the first eigenvector of the correlation matrix and multiplies all variables by their corresponding sign. For example:

net_modSelect_rescale <- estimateNetwork(bfi[,1:25], 
              default = "ggmModSelect",
              stepwise = FALSE,
              principalDirection = TRUE)

net_thresh_rescale <- estimateNetwork(bfi[,1:25],
              tuning = 0, 
              default = "EBICglasso",
              threshold = TRUE,
              principalDirection = TRUE)

plot(net_modSelect_rescale, layout = Layout, 
     title = "ggmModSelect")
plot(net_thresh_rescale, layout = Layout, 
     title = "Thresholded EBICglasso")


This makes edges of an unexpected sign much more pronounced, at the cost of interpretability (as the rescaling of some variables now has to be taken into account).

One potentially useful centrality index that can be used on graphs mostly showing only positive relationships (and all variables recoded in the same direction) is the newly proposed expected influence measure. Expected influence computes node strength without taking the absolute value of edge-weights. This centrality measure can now be obtained:

# The surrounding plotting call is assumed to be qgraph's centralityPlot:
qgraph::centralityPlot(
  list(
    ggmModSelect = net_modSelect_rescale,
    EBICGlasso_thresh = net_thresh_rescale
  ), include = "ExpectedInfluence"
)
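As a toy illustration of the difference (using a hypothetical 3-node weights matrix, not the data above): strength sums absolute edge-weights per node, while expected influence keeps the signs:

```r
# Hypothetical symmetric weights matrix for three nodes:
W <- matrix(c( 0.0,  0.3, -0.2,
               0.3,  0.0,  0.4,
              -0.2,  0.4,  0.0), nrow = 3, byrow = TRUE)

strength <- colSums(abs(W))       # absolute values: classic node strength
expected_influence <- colSums(W)  # signs retained: expected influence

strength            # 0.5 0.7 0.6
expected_influence  # 0.1 0.7 0.2
```

For node 1, the negative edge cancels part of the positive one, so its expected influence (0.1) is much lower than its strength (0.5).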


To bootstrap expected influence, it has to be requested from bootnet():

boots <- bootnet(net_thresh_rescale, statistics = "ExpectedInfluence", 
                 nBoots = 1000, nCores = 8, type = "case")

plot(boots, statistics = "ExpectedInfluence") + 
  theme(legend.position = "none")


In addition to expected influence, thanks to the contribution of Alex Christensen, randomized shortest paths betweenness centrality (RSPBC) and hybrid centrality are now also supported, which can be called using statistics = c("rspbc", "hybrid").

Relative importance networks

Relative importance networks can be seen as a re-parameterization of the GGM in which directed edges quantify the (relative) contribution of each variable to the predictability of the other variables. This can be useful mainly for two reasons: (1) the relative importance measures are very stable, and (2) centrality indices based on relative importance networks have more natural interpretations. For example, the in-strength of non-normalized relative importance networks equals \(R^2\). Bootnet has been updated to support directed networks, which allows for estimating relative importance networks. The estimation may be slow with more than 20 variables, though. Using only the first 10 variables of the BFI dataset, we can compute the relative importance network as follows:

net_relimp <- estimateNetwork(bfi[,1:10],
              default = "relimp",
              normalize = FALSE)

As the relative importance network can be seen as a re-parameterization of the GGM, it makes sense to first estimate a GGM structure and subsequently impose that structure on the relative importance network estimation. This can be done using the structureDefault argument:

net_relimp2 <- estimateNetwork(bfi[,1:10],
              default = "relimp",
              normalize = FALSE,
              structureDefault = "ggmModSelect",
              stepwise = FALSE # Sent to structureDefault function
)

Layout <- qgraph::averageLayout(net_relimp, net_relimp2)
plot(net_relimp, layout = Layout, title = "Saturated")
plot(net_relimp2, layout = Layout, title = "Non-saturated")


The difference is hardly visible because edges that are removed in the GGM structure estimation are not likely to contribute a lot of predictive power.

Time-series analysis

The LASSO regularized estimation of graphical vector auto-regression (VAR) models, as implemented in the graphicalVAR package, is now supported in bootnet! For example, we can use the data and codes supplied in the supplementary materials of our recent publication in Clinical Psychological Science to obtain the detrended data object Data. Now we can run the graphical VAR model (I use nLambda = 8 here to speed up computation, but higher values are recommended and are used in the paper):

# Variables to include (the list may have been truncated in this post):
Vars <- c("relaxed","sad","nervous","concentration","tired","rumination")

# Estimate model:
gvar <- estimateNetwork(
  Data, default = "graphicalVAR", vars = Vars,
  tuning = 0, dayvar = "date", nLambda = 8
)

We can now plot both networks:

Layout <- qgraph::averageLayout(gvar$graph$temporal, 
                                gvar$graph$contemporaneous)
plot(gvar, graph = "temporal", layout = Layout, 
     title = "Temporal")
plot(gvar, graph = "contemporaneous", layout = Layout, 
     title = "Contemporaneous")


The bootstrap can be performed as usual:

gvar_boot <- bootnet(gvar, nBoots = 100, nCores = 8)

To plot the results, we need to make use of the graph argument:

plot(gvar_boot, graph = "contemporaneous", plot = "interval")


Updates to bootstrapping methods

Splitting edge accuracy and model inclusion

Some of the new default sets (ggmModSelect, TMFG and LoGo) do not rely on regularization techniques to pull estimates to zero. Rather, they first select a set of edges to include, then estimate a parameter value only for the included edges. The default plotting method of edge accuracy, which has been updated to include the means of the bootstraps, will then not accurately reflect the range of parameter values. Making use of arguments plot = "interval" and split0 = TRUE, we can plot quantile intervals only for the times the parameter was not set to zero, in addition to a box indicating how often the parameter was set to zero:

# Agreeableness items only to speed things up:
net_modSelect_A <- estimateNetwork(bfi[,1:5], 
              default = "ggmModSelect",
              stepwise = TRUE,
              corMethod = "cor")

# Bootstrap:
boot_modSelect <- bootnet(net_modSelect_A, nBoots = 100, nCores = 8)

# Plot results:
plot(boot_modSelect, plot = "interval", split0 = TRUE)


This shows that the edge A1 (“Am indifferent to the feelings of others”) – A5 (“Make people feel at ease”) was always removed from the network, contradicting the factor model. The transparency of the intervals also shows how often an edge was included. For example, the edge A1 – A4 was almost never included, but when it was included, it was estimated to be negative.

Accuracy of directed networks

Accuracy plots of directed networks are now supported:

net_relimp_A <- estimateNetwork(bfi[,1:5],
              default = "relimp",
              normalize = FALSE)
boot_relimp_A <- bootnet(net_relimp_A, nBoots = 100, nCores = 8)
plot(boot_relimp_A, order = "sample")


Simulation suite

The netSimulator function and accompanying plot method have been greatly expanded. The most basic use of the function is to simulate the performance of an estimation method given some network structure:

Sim1 <- netSimulator(
  input = net_modSelect, 
  dataGenerator = ggmGenerator(),
  nCases = c(100,250,500,1000),
  nCores = 8,
  nReps = 100,
  default = "ggmModSelect",
  stepwise = FALSE)



Instead of keeping the network structure fixed, we could also use a function as input argument to generate a different structure in every repetition. The updated genGGM function allows for many such structures (thanks to Mark Brandt!). In addition, we can supply multiple values to any estimation argument to test multiple conditions. For example, perhaps we are interested in investigating whether the stepwise model improvement is really needed in 10-node random networks with \(75\%\) sparsity (a \(25\%\) connection probability):

Sim2 <- netSimulator(
  input = function()bootnet::genGGM(10, p = 0.25, graph = "random"),  
  dataGenerator = ggmGenerator(),
  nCases = c(100,250,500,1000),
  nCores = 8,
  nReps = 100,
  default = "ggmModSelect",
  stepwise = c(FALSE, TRUE))

plot(Sim2, color = "stepwise")


which shows slightly better specificity at higher sample sizes. We might wish to repeat this simulation study for 50% sparse networks:

Sim3 <- netSimulator(
  input = function()bootnet::genGGM(10, p = 0.5, graph = "random"),  
  dataGenerator = ggmGenerator(),
  nCases = c(100,250,500,1000),
  nCores = 8,
  nReps = 100,
  default = "ggmModSelect",
  stepwise = c(FALSE, TRUE))

plot(Sim3, color = "stepwise")


The results object is simply a data frame, meaning we can combine our results easily and use the plot method very flexibly:

Sim2$sparsity <- "sparsity: 0.75"
Sim3$sparsity <- "sparsity: 0.50"
Sim23 <- rbind(Sim2,Sim3)
plot(Sim23, color = "stepwise", yfacet = "sparsity")


Investigating the recovery of centrality (correlation with true centrality) is also possible:

plot(Sim23, color = "stepwise", yfacet = "sparsity",
    yvar = c("strength", "closeness", "betweenness", "ExpectedInfluence"))


Likewise, we can also change the x-axis variable:

Sim23$sparsity2 <- gsub("sparsity: ", "", Sim23$sparsity)
plot(Sim23, color = "stepwise", yfacet = "nCases",
    xvar = "sparsity2", xlab = "Sparsity")


This allows for setting up powerful simulation studies with minimal effort. For example, inspired by the work of Williams and Rast, we could compare such an un-regularized method with a regularized method and significance thresholding:

Sim2b <- netSimulator(
  input = function()bootnet::genGGM(10, p = 0.25, graph = "random"),  
  dataGenerator = ggmGenerator(),
  nCases = c(100,250,500,1000),
  nCores = 8,
  nReps = 100,
  default = "EBICglasso",
  threshold = c(FALSE, TRUE))

Sim2c <- netSimulator(
  input = function()bootnet::genGGM(10, p = 0.25, graph = "random"),  
  dataGenerator = ggmGenerator(),
  nCases = c(100,250,500,1000),
  nCores = 8,
  nReps = 100,
  default = "pcor",
  threshold = "sig",
  alpha = c(0.01, 0.05))

# Add an estimator label per condition (the labels below are assumed):
Sim2$estimator <- paste0("ggmModSelect (stepwise = ", Sim2$stepwise, ")")
Sim2b$estimator <- paste0("EBICglasso (threshold = ", Sim2b$threshold, ")")
Sim2b$threshold <- NULL
Sim2c$estimator <- paste0("pcor (a = ", Sim2c$alpha, ")")

# Combine:
Sim2full <- dplyr::bind_rows(Sim2, Sim2b, Sim2c)

# Plot:
plot(Sim2full, color = "estimator")


Replication simulator

In response to a number of studies aiming to investigate the replicability of network models, I have implemented a function derived from netSimulator that instead simulates two datasets from a network model, treating the second as a replication dataset. This allows researchers to investigate what to expect when aiming to replicate an effect. The input and usage are virtually identical to those of netSimulator:

SimRep <- replicationSimulator(
  input = function()bootnet::genGGM(10, p = 0.25, graph = "random"),  
  dataGenerator = ggmGenerator(),
  nCases = c(100,250,500,1000),
  nCores = 8,
  nReps = 100,
  default = "ggmModSelect",
  stepwise = c(FALSE, TRUE))

plot(SimRep, color = "stepwise")


Future developments

As always, I highly welcome bug reports and code suggestions on Github. I would also welcome any volunteers willing to help work on this project. This work can include adding new default sets, but also overhauling the help pages or other tasks. Please contact me if you are interested!

New features in qgraph 1.5

Written by Sacha Epskamp.

While the developmental version is routinely updated, I update the stable qgraph releases on CRAN less often. The last major update (version 1.4) was 1.5 years ago. After several minor updates since, I have now completed work on a new larger version of the package, version 1.5, which is now available on CRAN. The full list of changes can be read in the NEWS file. In this blog post, I will describe some of the new functionality.

New conservative GGM estimation algorithms

Recently, there has been some debate on the specificity of EBICglasso in exploratory estimation of Gaussian graphical models (GGM). While EBIC selection of regularized glasso networks works well in retrieving network structures at low sample sizes, Donald Williams and Philippe Rast recently showed that specificity can be lower than expected in dense networks with many small edges, leading to an increase in false positives. These false edges are nearly invisible under default qgraph fading options (also due to regularization), and should not influence typical interpretations of these models. However, some lines of research focus on discovering the smallest edges (e.g., bridge symptoms or environmental edges), and there have been increasing concerns regarding the replicability of such small edges. To this end, qgraph 1.5 now includes a warning when a dense network is selected, and includes two new, more conservative estimation algorithms: thresholded EBICglasso estimation and unregularized model selection.

Thresholded EBICglasso

Based on recent work by Jankova and Van de Geer (2018), a low false positive rate is guaranteed for off-diagonal (\(i \not= j\)) precision matrix elements (proportional to partial correlation coefficients) \(\kappa_{ij}\) for which:

\[ |\kappa_{ij}| > \frac{\log\left( p(p-1)/2 \right)}{\sqrt{n}}. \]
The option threshold = TRUE in EBICglasso and qgraph(..., graph = "glasso") now employs this thresholding rule, setting to zero the edge-weights that are not larger than the threshold, both in the returned final model and in the EBIC computation of all considered models. Preliminary simulations indicate that with this thresholding rule, high specificity is guaranteed in many cases (an exception is the case in which the true model is not in the glasso path, at very high sample sizes such as \(N > 10{,}000\)). A benefit of this approach over the unregularized option described below is that edge parameters are still regularized, preventing large visual overrepresentations due to sampling error.
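As a quick sanity check of this rule, the threshold value can be computed directly (a minimal sketch; the function name and the dimensions below are illustrative, not part of qgraph):

```r
# Thresholding rule: |kappa_ij| must exceed log(p * (p - 1) / 2) / sqrt(n)
ggm_threshold <- function(p, n) {
  log(p * (p - 1) / 2) / sqrt(n)
}

# For 25 nodes and 2800 observations (roughly the bfi dimensions),
# partial correlations below roughly 0.11 are set to zero:
ggm_threshold(p = 25, n = 2800)
```

Note how the threshold grows only logarithmically in the number of possible edges but shrinks with \(\sqrt{n}\), so larger samples retain more small edges.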

The following code showcases non-thresholded vs. thresholded EBICglasso:

bfiSub <- bfi[,1:25]
g1 <- qgraph(cor_auto(bfiSub), graph = "glasso", sampleSize = nrow(bfi),
             layout = "spring", theme = "colorblind", title = "EBICglasso", 
             cut = 0)
g2 <- qgraph(cor_auto(bfiSub), graph = "glasso", sampleSize = nrow(bfi), 
       threshold = TRUE, layout = g1$layout, theme = "colorblind", 
       title = "Thresholded EBICglasso", cut = 0)


While the thresholded graph is much sparser, that does not mean all removed edges are false positives; many likely reflect true edges.

Unregularized Model Search

While the LASSO has mostly been studied in high-dimensional low-sample cases, in many situations research focuses on relatively low-dimensional (e.g., 20 nodes) settings with high sample size (e.g., \(N > 1{,}000\)). It is therefore arguable whether regularization techniques are really needed. In the particular case of GGMs, one could also use model selection on unregularized models in which some pre-defined edge-weights are set to zero. It has been shown that (extended) Bayesian information criterion (EBIC) selection of such unregularized models selects the true model as \(N\) grows to \(\infty\) (Foygel and Drton, 2010). The new function ggmModSelect now supports model search of unregularized GGM models, using the EBIC exactly as it is computed in the lavaan package. The EBIC hyperparameter \(\gamma\) is set by default to \(0\) (BIC selection) rather than \(0.5\), as preliminary simulations indicate that \(\gamma = 0\) shows much better sensitivity while retaining high specificity. By default, ggmModSelect will first run the glasso algorithm for \(100\) different tuning parameters to obtain \(100\) different network structures. Next, the algorithm refits all those networks without regularization and picks the best. Subsequently, the algorithm adds and removes edges until EBIC can no longer be improved. The full algorithm is:

  1. Run glasso to obtain 100 models
  2. Refit all models without regularization
  3. Choose the best according to EBIC
  4. Test all possible models in which one edge is changed (added or removed)
  5. If no edge can be added or changed to improve EBIC, stop here
  6. Change the edge that best improved EBIC, now test all other edges that would have also led to an increase in EBIC again
  7. If no edge can be added or changed to improve EBIC, go to 4, else, go to 6.

When stepwise = FALSE, steps 4 to 7 are skipped, and when considerPerStep = "all", all edges are considered at every step. The following code showcases the algorithm:

modSelect_0 <- ggmModSelect(cor_auto(bfiSub), nrow(bfi), gamma = 0, nCores = 8)
modSelect_0.5 <- ggmModSelect(cor_auto(bfiSub), nrow(bfi), gamma = 0.5, nCores = 8)
g3 <- qgraph(modSelect_0$graph, layout = g1$layout, theme = "colorblind", 
       title = "ggmModSelect (gamma = 0)", cut = 0)
g4 <- qgraph(modSelect_0.5$graph, layout = g1$layout, theme = "colorblind", 
       title = "ggmModSelect (gamma = 0.5)", cut = 0)


Note that this algorithm is very slow in higher dimensions (e.g., above 30-40 nodes), in which case only the regular EBICglasso, thresholded EBICglasso, or setting stepwise = FALSE are feasible. Of note, centrality analyses, especially of the more stable strength metric, are hardly impacted by the estimation method:

# g1 and g2: the EBICglasso networks estimated earlier in this post
centralityPlot(list(EBICglasso = g1,
       EBICglasso_thresholded = g2,
       ggmModSelect_0 = g3,
       ggmModSelect_0.5 = g4))


Both thresholded EBICglasso and ggmModSelect are implemented in the development version of bootnet, which will soon be updated on CRAN as well. Preliminary simulations show that both guarantee high specificity, at the cost of some sensitivity. Using ggmModSelect with \(\gamma = 0\) (BIC selection) shows better sensitivity and works well in detecting small edges, but is slow when coupled with stepwise model search, which may make bootstrapping hard. I encourage researchers to investigate these and competing methods in large-scale simulation studies.

Which estimation method to use?

Both new methods are much more conservative than the EBICglasso, leading to drops in sensitivity and possible misrepresentations of the true sparsity of the network structure. For exploratory hypothesis-generation purposes at relatively low sample sizes, the original EBICglasso is likely to be preferred. At higher sample sizes, and with a focus on identifying small edges, the conservative methods may be preferred instead. There are many more GGM estimation procedures available in other R packages, and detailed simulation studies investigating which estimator works best in which case are now being performed in multiple labs. I have also implemented simulation functions in the development version of bootnet to aid in studying these methods, which I will describe in an upcoming blog post.

Flow diagrams

Sometimes, researchers are interested in the connectivity of one node in particular, which can be hard to see in a Fruchterman-Reingold layout, especially when the connections to that node are weak. The new flow function, which I developed together with Adela Isvoranu, places nodes in such a way that the connections of one node are clearly visible. The function places the node of interest on the left, and then places the remaining nodes in vertical levels according to whether they are connected to the node of interest in 1, 2, 3, etc. steps. Edges between nodes in the same level are drawn as curved edges. For example:

flow(g2, "N3", theme = "colorblind", vsize = 4)


Expected influence

The centrality index expected influence is now returned by centrality() and can be plotted using centralityPlot(), although it has to be requested via the include argument. In addition, the plots can now be ordered by one of the indices:

centralityPlot(g2, include = c("Strength","ExpectedInfluence"),
               orderBy = "ExpectedInfluence")


Note, however, that the BFI network is not an optimal network for computing expected influence, as some variables are (arbitrarily) scored negatively. Expected influence is best computed on a network in which higher scores on all nodes have the same interpretation (e.g., symptoms, for which higher = more severe).
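
The reason is visible directly in the definitions: node strength sums the absolute edge weights, whereas expected influence sums the signed weights,

\[
s_i = \sum_{j \neq i} |w_{ij}|, \qquad \mathrm{EI}_i = \sum_{j \neq i} w_{ij},
\]

so for negatively keyed variables, positive and negative weights cancel in \(\mathrm{EI}_i\) in a way that depends on the (arbitrary) scoring direction.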

Future developments

As always, I highly welcome bug reports and code suggestions on Github. In addition, I will also update the bootnet package soon and write a separate blog post on its latest additions.


Jankova, J., and van de Geer, S. (2018) Inference for high-dimensional graphical models. In: Handbook of graphical models (editors: Drton, M., Maathuis, M., Lauritzen, S., and Wainwright, M.). CRC Press: Boca Raton, Florida, USA.

Foygel, R., & Drton, M. (2010). Extended Bayesian information criteria for Gaussian graphical models. In Advances in neural information processing systems (pp. 604-612).

Recent developments on the performance of graphical LASSO networks

This post was written by Sacha Epskamp, Gaby Lunansky, Pia Tio and Denny Borsboom, on behalf of the Psychosystems lab group.

Currently, many network models are estimated using LASSO regularization. For ordinal and continuous variables, a popular option is the graphical LASSO (GLASSO), which estimates the network by estimating a sparse inverse of the variance-covariance matrix. For a description of this method, see our recently published tutorial paper on this topic. Last week we learned of a new preprint by Donald Williams and Philippe Rast, who show that under certain conditions the GLASSO, coupled with EBIC model selection (EBICglasso), can have lower specificity than expected. Specificity is defined as 1 minus the probability of estimating an edge that is not in the true model. Thus, the results of Williams and Rast indicate that in certain cases edges may be retrieved that do not correspond to true edges. This is interesting, as LASSO methods usually show high specificity all-around.
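
For concreteness, sensitivity and specificity can be computed by comparing the estimated edge set to the true one. The helper below is an illustrative sketch (netAccuracy is a hypothetical name, not a qgraph or bootnet function):

```r
# Sensitivity and specificity of an estimated network, given the true
# weights matrix. Both arguments are symmetric p x p matrices; an edge
# counts as "present" when its weight is nonzero.
netAccuracy <- function(trueNet, estNet) {
  truth <- trueNet[upper.tri(trueNet)] != 0
  est   <- estNet[upper.tri(estNet)] != 0
  c(sensitivity = sum(est & truth) / sum(truth),     # true edges recovered
    specificity = sum(!est & !truth) / sum(!truth))  # absent edges kept absent
}
```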

We have investigated these claims and replicated the findings using the independent implementations of EBICglasso in the qgraph and huge packages. The effect seems to occur only in network structures that are very dense (much denser than those investigated in typical simulations and proofs regarding the GLASSO) and that feature many very small edges, blurring the line between true and false effects. We already noted notably lower specificity than expected in our tutorial paper, when discussing tools one may use to investigate these properties for a given network structure, and hypothesized that this may be an effect of a dense network structure with many small edges. The results of Williams and Rast seem to confirm this speculation by taking the effect to the extreme.

What is the problem and how can it be solved?

The main problem is that, in some cases, the GLASSO algorithm may retrieve very small false edges that are not present in the true network under which the data are simulated. There are two reasons why this may occur:

  1. For certain tuning parameters, GLASSO identifies the correct model. However, the EBIC or cross-validation selects the wrong tuning parameter.
  2. GLASSO is incapable of selecting the right model for any tuning parameter even with infinite sample size, and as a result any selection procedure is doomed to fail.

Case 1 is interesting and may be the reason why regularized EBICglasso estimation has high specificity but does not always converge to the true model. This case can lead to a drop in specificity to about 0.6 – 0.7 and seems to occur in dense networks. We hypothesize that the added penalty biases parameters so much that the EBIC tends to select a slightly denser model to compensate. We are currently working on ways to improve tuning-parameter selection, including running more simulations on preferred EBIC hyperparameters in denser networks than usually studied, theoretically derived thresholding rules, and non-regularized model-selection procedures.
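
To see where this bias comes from, recall the standard graphical LASSO objective: with \(S\) the sample variance-covariance matrix and \(\mathbf{K}\) the precision matrix to be estimated,

\[
\hat{\mathbf{K}} = \operatorname*{arg\,max}_{\mathbf{K} \succ 0} \left[ \log \det(\mathbf{K}) - \operatorname{tr}(S\mathbf{K}) - \lambda \sum_{i \neq j} |\kappa_{ij}| \right],
\]

so every retained edge is shrunk toward zero; the resulting loss in likelihood can then be compensated in model selection by admitting extra small edges.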

Case 2, which seems to be the setting in which Williams and Rast operate, is harder to tackle. To exemplify the problem, we generated a 20% sparse graph using the code kindly provided by Williams and Rast and entered the true implied variance-covariance matrix into glasso. Inverting this true matrix and thresholding at an arbitrarily small number returns the correct graph. However, at no tuning parameter is the GLASSO algorithm capable of retrieving this graph:
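
In code, this type of check looks roughly as follows. This is an illustrative sketch, not the exact replication code: a small toy precision matrix stands in for the Williams and Rast structure, whose covariance matrix is available in the linked code.

```r
library(glasso)

# A toy true precision matrix and its implied covariance matrix
K <- diag(5)
K[1, 2] <- K[2, 1] <- 0.3
K[3, 4] <- K[4, 3] <- 0.3
Sigma <- solve(K)  # "true" variance-covariance matrix (N = infinite)

# True graph: invert Sigma and threshold at an arbitrarily small number
trueGraph <- abs(solve(Sigma)) > sqrt(.Machine$double.eps)
diag(trueGraph) <- FALSE

# Fit glasso across a range of tuning parameters and check whether any
# of them recovers the true structure; for the Williams & Rast matrix
# this is FALSE at every value of rho
fit <- glassopath(Sigma, rholist = seq(0.01, 0.5, length = 50), trace = 0)
recovered <- apply(fit$wi, 3, function(k) {
  est <- abs(k) > sqrt(.Machine$double.eps)
  diag(est) <- FALSE
  identical(est, trueGraph)
})
any(recovered)
```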



The code for this example is available online. It is striking that at no point does GLASSO capture the true model, even though the true variance-covariance matrix (corresponding to \(N = \infty\)) was used as input. This is not common. Using the exact same code, an 80% dense network parameterized according to Yin & Li (2011) shows a strikingly different picture:


The resulting figure shows that the true model is retrieved for some GLASSO tuning parameters. We are currently investigating whether we can identify cases in which the GLASSO is incapable of retrieving the correct model at infinite sample size, and we hypothesize that this may be due to (a) very low partial correlations in the generating structure, which blur the line between true and false edges, and (b) relatively large partial correlations or implied marginal correlations, which lead to inversion problems. We welcome feedback on why this set of networks fails in GLASSO estimation and on how such troublesome cases may be detected in empirical data.

Why can sparsity impact GLASSO performance?

Many network estimation techniques, including the GLASSO, rely on the assumption of sparsity, also termed the bet on sparsity. That is, these methods work very well when the true model is sparse (i.e., contains relatively few edges). When the true model is not sparse, however, the LASSO may lead to unexpected results. Most simulation studies and mathematical proofs have studied network retrieval in sparse settings. As more psychological network studies are published, however, one consistent finding is that psychological networks are less sparse than expected: at high sample sizes, most studies identify many small edges near zero. This finding led to a change in the qgraph default for the lower bound on the glasso tuning parameter (lambda.min.ratio), which at its former value of 0.1 leads to networks that are estimated too sparse. The default is now 0.01, which leads to better sensitivity but also a small decrease in specificity. Williams and Rast set this value to 0.001 in some cases, which should again mark an increase in sensitivity but perhaps also a decrease in specificity.
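
The role of lambda.min.ratio is easiest to see in how the tuning-parameter path is constructed. The sketch below follows the standard approach of spacing the path logarithmically below the maximal absolute covariance (the function name is illustrative, not the exact qgraph implementation):

```r
# Construct a glasso tuning-parameter path running from the maximal
# absolute (co)variance down to lambda.min.ratio times that value,
# logarithmically spaced.
lambdaPath <- function(S, nlambda = 100, lambda.min.ratio = 0.01) {
  lambda.max <- max(abs(S[upper.tri(S)]))
  exp(seq(log(lambda.max), log(lambda.min.ratio * lambda.max),
          length.out = nlambda))
}
```

Lowering lambda.min.ratio extends the path toward weaker penalties, so denser candidate models enter the model selection.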

Are graphical LASSO networks unstable?

Fortunately, even in the extreme cases discussed by Williams and Rast, low specificity arises exclusively because very weak edges may be incorrectly included in the graph; in a qgraph plot, for instance, these would be nearly invisible. As a result, the general structure and substantive interpretation of the network are not affected. GLASSO results, like those of other statistical methods, are not intrinsically stable or unstable, and follow-up stability and accuracy checks should always be performed after fitting a network. If the estimated network is stable and accurate, the work of Williams and Rast gives no reason to doubt its interpretation, except when that interpretation relies on a thorough investigation of the weakest edges in the network (possibly made visible by removing the fading of edges when displaying them). Of note, the bootstrapping methods also allow checking whether an edge differs significantly from zero. We argue against routinely doing so, as it would amount to double-thresholding the network and cause a severe loss of sensitivity. But if the goal is to retrieve small edges, it may be warranted to check whether zero is contained in the confidence interval of the regularized edge parameter.

Does this affect the replicability of networks?

Replicability and model selection are related but do not refer to the same methodological criterion. The recent findings regarding the GLASSO algorithm concern model quality: when simulating data from a ‘true model’, we can check whether the results of an estimation method converge to this true model with increasing sample size (Borsboom et al., in press). Under certain circumstances, the GLASSO algorithm retrieves very small edges that are not present in the true model under which the data are simulated (i.e., false-positive edges). Replicability concerns finding the same results in different empirical samples. Naturally, if the GLASSO algorithm indeed retrieves small false-positive edges, this will affect the replicability of these small edges across empirical samples. However, since these edges should not be there in the first place, the origin of the issue is model selection, not replicability. While we investigate these results further, an important take-home message of the work of Williams and Rast is that researchers who aim to use network models to identify very small effects (when it is plausible that the ‘true model’ is dense with many small edges) may wish to consider other estimation methods, such as the node-wise estimation or non-regularized model selection discussed below.

Are all networks estimated using the graphical LASSO?

No. GLASSO is only one of many options available for estimating undirected network models from data. GLASSO is often used because it (a) is fast, (b) has high power in detecting edges, (c) selects the entire model in one step, and (d) only requires a covariance matrix as input, which makes it possible to compute networks from polychoric correlations (for ordinal data) and facilitates reproducibility of results. Besides the GLASSO, another option for regularized network estimation is to perform (logistic) regressions of every variable on all other variables and to combine the resulting regression parameters (proportional to edge weights) into a network. This is called node-wise estimation, and it is at the core of several often-used estimation methods such as the adaptive LASSO, IsingFit, and mixed graphical models (MGM). While node-wise regression estimation has less power than the GLASSO, there are cases in which the GLASSO fails but node-wise regressions do not (Ravikumar et al., 2008). Our preliminary simulations showed that node-wise estimation methods retain high specificity in the particular case identified by Williams and Rast in which the specificity of the GLASSO drops. In addition to LASSO estimation methods that aim to retrieve sparse models, other regularization techniques exist that instead aim to estimate low-rank dense networks. Still another option is elastic-net estimation, which compromises between estimating sparse and dense networks.
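
A minimal sketch of the node-wise idea, without regularization and with simple averaging rather than the AND/OR rules used by the actual methods (nodewiseNet is an illustrative name, not a function from any of the packages above):

```r
# Regress each (standardized) variable on all others and combine the
# regression weights into a symmetric network of edge weights.
nodewiseNet <- function(data) {
  data <- scale(as.matrix(data))
  p <- ncol(data)
  B <- matrix(0, p, p)
  for (i in seq_len(p)) {
    fit <- lm(data[, i] ~ data[, -i])
    B[i, -i] <- coef(fit)[-1]  # drop the intercept
  }
  (B + t(B)) / 2  # average the two estimates of each edge
}
```

Regularized node-wise methods replace lm with a penalized regression and combine the two directed estimates of each edge with an AND or OR rule instead of averaging.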

Of note, and as Williams and Rast describe clearly, networks can also be estimated using non-regularized methods. Two main ways to do this are (a) estimating a saturated network and subsequently removing edges that fall below some threshold (e.g., based on significance tests, as argued by Williams and Rast), and (b) selecting among several estimated models in which edge parameters are estimated without regularization. Both have already been discussed in the blog post that also introduced GLASSO estimation in the qgraph package. Using the ‘threshold’ argument in qgraph and bootnet, one can threshold edges based on significance (possibly after correcting for multiple tests) or false discovery rate. A downside of thresholding, however, is that no model selection is performed, so the retained edges are estimated without taking into account the other edges that are set to zero. A (rather slow) step-wise model-selection procedure has been implemented in qgraph in the function ‘findGraph’. We are currently working on a faster non-regularized model-selection procedure, in which the GLASSO algorithm is first used to generate a limited set of models (e.g., 100), which are subsequently refitted without regularization. Preliminary simulations show that this procedure is more conservative than the regularized GLASSO and often converges to the true model at high sample sizes.

The bootnet package contains many default sets to facilitate computing networks using different estimation procedures in a unified framework. The table below provides an overview of methods currently implemented as default sets and when they can be used. If you would like us to implement more default sets, please let us know!

[Table: overview of estimation procedures currently implemented as bootnet default sets.]


Borsboom, D., Robinaugh, D. J.,  The Psychosystems Group, Rhemtulla M. & Cramer, A. O. J. (in press). Robustness and replicability of psychopathology networks. World Psychiatry.

Foygel, R., & Drton, M. (2010). Extended Bayesian information criteria for Gaussian graphical models. Advances in neural information processing systems, 604-612.

Ravikumar, P., Raskutti, G., Wainwright, M. J., & Yu, B. (2008). High-dimensional covariance estimation by minimizing l1-penalized log-determinant divergence. Electronic Journal of Statistics, 5, 935–980.

Yin, J., & Li, H. (2011). A sparse conditional Gaussian graphical model for analysis of genetical genomics data. The Annals of Applied Statistics, 5(4), 2630–2650.