Some notes on the Journal of Abnormal Psychology special issue on the reproducibility of network analysis

By Denny Borsboom, Eiko Fried, Lourens Waldorp, Claudia van Borkulo, Han van der Maas, Angélique Cramer, Don Robinaugh, and Sacha Epskamp

The Journal of Abnormal Psychology has just published a paper by Forbes, Wright, Markon, and Krueger (2017a), entitled “Evidence that psychopathology symptom networks have limited replicability”, along with two commentaries and a rejoinder. One of the commentaries is from our group (Borsboom, Fried, Epskamp, Waldorp, van Borkulo, van der Maas, & Cramer, 2017). In it, we identify several major statistical errors and methodological problems in Forbes et al.’s paper, and we show that network models replicate very well when properly implemented: the correlations between network parameters in the original and replication samples exceed .90 for both the Ising models and the relative importance networks. In fact, a formal statistical test shows that the hypothesis that the Ising networks in the two samples are precisely equal cannot be rejected, despite high statistical power.
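
For readers who want to run this kind of test on their own data, here is a minimal sketch in R. It assumes two binary (0/1) symptom datasets, sample1 and sample2, with matching columns, and uses the NetworkComparisonTest package (van Borkulo et al.); this is an illustration rather than the exact code behind our commentary, and argument and output names may differ between package versions.

    # Permutation-based test of network invariance across two samples (sketch).
    library(NetworkComparisonTest)

    set.seed(123)
    nct <- NCT(sample1, sample2, it = 1000, binary.data = TRUE)

    nct$nwinv.pval     # omnibus test: are the two network structures equal?
    nct$glstrinv.pval  # are the networks equally strongly connected overall?

A non-significant result means that the hypothesis that the two networks are equal cannot be rejected.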

We are strongly committed to open science and would ideally have shared all of the data and code, so that readers could make up their own minds on whether or not networks replicate. Unfortunately, neither Forbes et al.’s analyses nor our own are fully reproducible, because the replication dataset used by Forbes et al. is not openly accessible. We can therefore share the analysis code we used and the original dataset, but not the replication dataset.

In their rejoinder, Forbes et al. (2017b) do not dispute the major points made in our commentary: they accept (a) that the relative importance networks they reported were incorrectly implemented; (b) that the data they used are unsuitable because the correlation matrix is distorted by the skip structure of the interview schedule; and (c) that it is incorrect to assume that different kinds of techniques should converge on the same network, as their analysis presupposed. In addition, Forbes et al. (2017b) now acknowledge that in the original paper that we were invited to comment on, which was widely shared by the authors and which we addressed in a previous blog post on this website, the reported directed acyclic graphs were also incorrectly implemented.

Thus, relative to their evidence base at the time of writing their original publication, two out of three of Forbes et al.’s network models proved to be implemented incorrectly, and the correlations among the variables they studied turned out to be affected so strongly by the interview’s skip structure that no firm conclusions could be drawn from them.

Despite these facts, and even though virtually all the ‘evidence’ they originally based their conclusion on has dissolved, in their rejoinder Forbes et al. (2017b) repeat the claim that network models do not replicate.

Their line of argument is no longer based on their results from the target paper, but instead rests on two new pillars.

First, Forbes et al. (2017b) present a literature review of network papers on posttraumatic stress disorder (PTSD), which they argue report different networks, and suggest that this proves that networks do not replicate. With regard to this line of argument, we note that there are many reasons why network structures can differ: because the networks really are different (e.g., the relevant PTSD networks were estimated on different samples, with different kinds of trauma, using different methodologies) or because of sampling error (e.g., due to small sample sizes). We agree that there is heterogeneity in the results of PTSD network analyses, which in 2015 inspired a large collaborative effort to investigate the replicability of PTSD network structures in a paper that is forthcoming in the journal Clinical Psychological Science. The question of how these differences originate is important, but the mere fact that networks differ does not provide evidence for the thesis that network models are “plagued with substantial flaws”, as Forbes et al. conclude, for the same reason that finding different factor models for PTSD across samples (Armour, Müllerová, & Elhai, 2015) does not imply that there is something wrong with factor analysis. Network methodology is evaluated on methodological grounds, using mathematical analysis and simulations, not on the basis of whether it yields different results in different populations.

Second, Forbes et al. (2017b) base their conclusions on a technique presented by the other team of commentators (Steinley, Hoffman, Brusco, & Sher, 2017), who purport to evaluate the statistical significance of connection parameters in the network. This is done by evaluating these parameters relative to a sampling distribution constructed by keeping the margins of the contingency table fixed (i.e., controlling for item and person effects). The test in question is not new; it has been known in psychometrics for several years as a way to investigate whether items adhere to a unidimensional latent variable model called the Rasch model (Verhelst, 2008). While we welcome new techniques to assess network models, we know from standing mathematical theory that any network connections that produce data consistent with the Rasch model will not be picked up by this testing procedure. This is problematic, because we also know that fully connected networks with uniform connection weights produce precisely such data (Epskamp, Maris, Waldorp, & Borsboom, in press). As a consequence, the proposed test is unreliable and cannot be used to evaluate the robustness of network models. Members of our team are preparing a full methodological evaluation of Steinley et al.’s (2017) technique and will post the results soon.
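
To make the logic of this procedure concrete, the sketch below implements a simple version of a fixed-margins test in R. It is our illustration, not Steinley et al.’s (2017) code: it uses elementary “checkerboard” swaps, which preserve all row sums (person totals) and column sums (item totals) of a binary data matrix, instead of Verhelst’s (2008) more efficient MCMC sampler.

    # Fixed-margins permutation test for the association between two
    # binary symptoms (illustration only; X is an n x p 0/1 matrix).
    swap_once <- function(M) {
      rows <- sample(nrow(M), 2)
      cols <- sample(ncol(M), 2)
      sub  <- M[rows, cols]
      # A 2x2 "checkerboard" submatrix can be flipped without changing
      # any row or column sum of M.
      if (all(sub == matrix(c(1, 0, 0, 1), 2)) ||
          all(sub == matrix(c(0, 1, 1, 0), 2))) {
        M[rows, cols] <- 1 - sub
      }
      M
    }

    # Draw association statistics for items i and j under the null that
    # only person and item effects drive the data:
    null_dist <- function(X, i, j, n_samples = 200, n_swaps = 5000) {
      stats <- numeric(n_samples)
      M <- X
      for (s in seq_len(n_samples)) {
        for (k in seq_len(n_swaps)) M <- swap_once(M)
        stats[s] <- cor(M[, i], M[, j])
      }
      stats
    }

    # Two-sided p-value for the observed association between items 1 and 2:
    # p <- mean(abs(null_dist(X, 1, 2)) >= abs(cor(X[, 1], X[, 2])))

The blind spot we describe above is built into this very construction: the Rasch model has exactly these person and item totals as sufficient statistics, so network structures that generate Rasch-consistent data leave no signal for the test to detect.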

In the conclusion of their rejoinder, Forbes et al. (2017b) call for methodologically rigorous testing of hypotheses that arise from network theory and analysis. We echo this conclusion. It is critical that we continue the hard work of advancing our analytic methods, forming causal hypotheses about the processes that generate and maintain mental disorders, and testing these hypotheses in dedicated studies. In addition, we think it is important that further replication studies be conducted to examine the extent to which network models replicate; interested researchers can use our analysis code as a template to implement the relevant models in different populations and compare the results.

References

Armour, C., Müllerová, J., & Elhai, J. D. (2015). A systematic literature review of PTSD’s latent structure in the Diagnostic and Statistical Manual of Mental Disorders: DSM-IV to DSM-5. Clinical Psychology Review, 44, 60–74. https://doi.org/10.1016/j.cpr.2015.12.003

Borsboom, D., Fried, E. I., Epskamp, S., Waldorp, L. J., van Borkulo, C. D., van der Maas, H. L. J., & Cramer, A. O. J. (2017). False alarm? A comprehensive reanalysis of “Evidence that psychopathology symptom networks have limited replicability” by Forbes, Wright, Markon, and Krueger. Journal of Abnormal Psychology.

Epskamp, S., & Fried, E. I. (in press). A Tutorial on Regularized Partial Correlation Networks. Psychological Methods. Preprint: http://arxiv.org/abs/1607.01367

Epskamp, S., Maris, G., Waldorp, L. J., & Borsboom, D. (in press). Network psychometrics. To appear in: Irwing, P., Hughes, D., & Booth, T. (Eds.), Handbook of Psychometrics. New York: Wiley.

Epskamp, S., Waldorp, L. J., Mõttus, R., & Borsboom, D. (submitted). Discovering Psychological Dynamics: The Gaussian Graphical Model in Cross-sectional and Time-series Data. Preprint: https://arxiv.org/abs/1609.04156

Forbes, M., Wright, A., Markon, K., & Krueger, R. (2017a). Evidence that psychopathology symptom networks have limited replicability. Journal of Abnormal Psychology.

Forbes, M., Wright, A., Markon, K., & Krueger, R. (2017b). Further evidence that psychopathology symptom networks have limited replicability and utility: Response to Borsboom et al. and Steinley et al. Journal of Abnormal Psychology. To our knowledge, the preprint for this paper is not currently available online.

Steinley, D., Hoffman, M., Brusco, M. J., & Sher, K. J. (2017). A method for making inferences in network analysis: A comment on Forbes, Wright, Markon, & Krueger (2017). Journal of Abnormal Psychology. To our knowledge, the preprint for this paper is not currently available online.

Verhelst, N. D. (2008). An Efficient MCMC Algorithm to Sample Binary Matrices with Fixed Marginals. Psychometrika, 73, 705–728. https://doi.org/10.1007/s11336-008-9062-3

Psychopathology networks replicate with stunning precision

By Denny Borsboom, Eiko Fried, Sacha Epskamp, Lourens Waldorp, Claudia van Borkulo, Han van der Maas, and Angélique Cramer

Update: The pre-print of our official commentary is now online at PsyArXiv.

In a forthcoming paper in the Journal of Abnormal Psychology entitled “Evidence that psychopathology symptom networks do not replicate”, Miriam Forbes, Aidan Wright, Kristian Markon, and Robert Krueger purport to show that network structures do not replicate across datasets. They estimate networks for symptoms of Major Depressive Episode (MDE) and Generalized Anxiety Disorder (GAD) in two large datasets: one from the National Comorbidity Survey-Replication (NCS) and one from the Australian National Survey of Mental Health and Well-Being (NSMHWB). As is evident from our published work (Fried & Cramer, in press; Fried, van Borkulo, Cramer, Boschloo, Schoevers, & Borsboom, 2017; Epskamp, Borsboom, & Fried, 2017), we see the reproducibility of network research as a top priority, and we are happy to see researchers investigating this important issue.

The conclusion proposed by the authors is that “network analysis […] had poor replicability between […] samples”. This conclusion is not supported by the data for state-of-the-art network models, as we will argue in a commentary solicited by the Journal of Abnormal Psychology. Given the intense interest in the matter, however, we deemed it useful to post a short note in advance to state a fact that may not be obvious to most readers given the rhetorical style of Forbes et al. (in press). In one sentence: state-of-the-art networks don’t just replicate – they replicate with stunning precision.

Unfortunately, the authors have not yet shared their data or code with us, so we cannot fully evaluate their work. They did, however, share the parameter matrices that resulted from their analyses, and these are sufficient to establish that the authors’ conclusion does not apply to state-of-the-art network modeling techniques (e.g., networks estimated using our R package IsingFit; van Borkulo et al., 2014). The authors suggest as much themselves when they write that “[t]he replicability of the edges in the Ising models was remarkably similar between and within samples”, but this observation is easily lost in the rhetoric of the paper’s title, abstract, and discussion. In addition, the authors insufficiently articulate just how similar the networks are; as a result, users of our techniques may wonder whether Ising networks really live up to their reputation as highly stable and reliable network estimation techniques.
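
To make concrete what such an analysis involves, here is a minimal sketch of the estimation step; it assumes two binary (0/1) symptom data frames, ncs and nsmhwb, with identical columns (the variable names are ours, for illustration).

    # Regularized Ising networks via eLasso (van Borkulo et al., 2014)
    library(IsingFit)

    fit_ncs    <- IsingFit(ncs, plot = FALSE)
    fit_nsmhwb <- IsingFit(nsmhwb, plot = FALSE)

    # Each fit contains a weighted adjacency matrix of edge weights
    # (fit$weiadj) and a vector of node thresholds (fit$thresholds).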

Figure 1. The networks estimated from the NCS and NSMHWB samples, and the scatterplot of network parameters as estimated in both samples (r = .95). The network representations use the default settings of qgraph (Epskamp, Cramer, Waldorp, Schmittmann, & Borsboom, 2012) and a common (average) layout to facilitate comparison between the datasets.

So just how replicable are Ising networks? Figure 1 pictures the situation quite clearly: the IsingFit networks are almost indistinguishable, and the network parameters display a whopping correlation of .95 for network edges and .93 for node thresholds across samples (Spearman correlations equal .88 and .85, respectively). Even centrality indices, which we usually approach with considerable caution due to their sensitivity to sampling variation (Epskamp, Borsboom, & Fried, 2017), show surprisingly good replication performance with correlations of .94 (strength), .94 (betweenness), and .76 (closeness).
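
Readers can reproduce this kind of comparison in a few lines of R, continuing from the sketch above (the numbers we report come from the parameter matrices shared by Forbes et al., but the same logic applies to any pair of fitted Ising networks):

    library(qgraph)

    # Correlate edge weights across samples, counting each edge once:
    lt <- lower.tri(fit_ncs$weiadj)
    cor(fit_ncs$weiadj[lt], fit_nsmhwb$weiadj[lt])                       # Pearson
    cor(fit_ncs$weiadj[lt], fit_nsmhwb$weiadj[lt], method = "spearman")  # Spearman

    # Correlate node thresholds across samples:
    cor(fit_ncs$thresholds, fit_nsmhwb$thresholds)

    # Correlate centrality indices (strength, betweenness, closeness):
    cen <- c("Strength", "Betweenness", "Closeness")
    c1 <- centrality_auto(fit_ncs$weiadj)$node.centrality
    c2 <- centrality_auto(fit_nsmhwb$weiadj)$node.centrality
    diag(cor(c1[, cen], c2[, cen]))

    # Plot both networks with a shared (average) layout, as in Figure 1:
    L <- averageLayout(fit_ncs$weiadj, fit_nsmhwb$weiadj)
    qgraph(fit_ncs$weiadj, layout = L)
    qgraph(fit_nsmhwb$weiadj, layout = L)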

In fact, nobody in our group had expected such an accurate replication across two entirely distinct samples. We therefore argue that the authors’ conclusion that “the unique utility of network analysis …seems limited to visualizing complex multivariate relationships…” is unwarranted. Given our re-analysis of the results in the Forbes et al. (in press) paper, one may wonder: how on earth can the authors interpret this result as “evidence that psychopathology networks do not replicate”? Well, if you want to find out, keep an eye out for the upcoming issue of the Journal of Abnormal Psychology, in which we will provide a comprehensive dissection of their methodology and argumentation. We’ll keep you posted!

References

Epskamp, S., Borsboom, D., & Fried, E. I. (2017). Estimating psychological networks and their accuracy: A tutorial paper. Behavior Research Methods. https://doi.org/10.3758/s13428-017-0862-1

Epskamp, S., Cramer, A. O. J., Waldorp, L. J., Schmittmann, V. D., & Borsboom, D. (2012). qgraph: Network visualizations of relationships in psychometric data. Journal of Statistical Software, 48, 1-18.

Forbes, M., Wright, A., Markon, K., & Krueger, R. (in press). Evidence that psychopathology symptom networks do not replicate. Journal of Abnormal Psychology.

Fried, E. I. & Cramer, A. O. J. (in press). Moving forward: challenges and directions for psychopathological network theory and methodology. Perspectives on Psychological Science.

Fried, E. I., van Borkulo, C. D., Cramer, A. O. J., Boschloo, L., Schoevers, R. A., & Borsboom, D. (2017). Mental disorders as networks of problems: A review of recent insights. Social Psychiatry and Psychiatric Epidemiology, 52, 1–10.

Van Borkulo, C. D., Borsboom, D., Epskamp, S., Blanken, T. F., Boschloo, L., Schoevers, R. A., & Waldorp, L. J. (2014). A new method for constructing networks from binary data. Scientific Reports, 4, 5918. https://doi.org/10.1038/srep05918

ERC Consolidator grant for the Psychosystems Project

The European Research Council (ERC) has awarded a consolidator grant to Denny Borsboom to support the psychosystems project. The project, entitled “Psychosystems: Consolidating Network Approaches to Psychopathology”, is designed to further develop the theory and methodology of networks for mental disorders. The ERC will support the project for five years, allowing us to launch a number of new postdoc and Ph.D. projects.

Four new Ph.D. projects on networks

We are happy to welcome four new Ph.D. researchers into the network community:

  • Marie Deserno will investigate determinants of success and problems in autism, in the context of a project organized by the Academische Werkplaats Autisme. The project is supervised by Hilde Geurts, Sander Begeer, and Denny Borsboom.
  • Riet van Bork will investigate methodologies suited to distinguish between network and latent variable explanations of symptom covariation. The project is funded partly by a European Commission Career Integration Grant awarded to Mijke Rhemtulla, who will supervise the project together with Denny Borsboom.
  • Jonas Dalege will develop network models for the structure and dynamics of attitudes in a project that involves a collaboration between the Psychological Methods and Social Psychology groups at the University of Amsterdam. The project is supervised by Frenk van Harreveld, Han van der Maas, and Denny Borsboom.
  • Merijn Mestdagh will develop time-series models for nonlinear networks involving large numbers of variables at KU Leuven, in a Ph.D. project awarded by the Flemish Organization for Scientific Research (FWO). The project is supervised by Francis Tuerlinckx and Denny Borsboom.

Three new NWO-Veni projects on networks

A very successful round of 2014 NWO grant applications for network researchers!

Our own Angélique Cramer received a Veni grant to further develop network methodology and apply it to mood disorders:

  • Network psychometrics: methods for exposing the architecture and dynamics of mood disorders
    Dr A.O.J. (Angélique) Cramer (f), University of Amsterdam – Psychometrics
    In the network approach, a psychological disorder is a consequence of interactions between symptoms (e.g., gloominess and worrying). In this project, methods to construct and analyse these networks will be further developed. Subsequently, the networks of patients with mood disorders will be investigated: can we, for example, predict when a patient will experience a relapse?

Hanneke Wigman of the University Medical Centre Groningen received a Veni grant to investigate symptom networks in the etiology of psychosis:

  • Networks of symptoms in psychosis
    Dr J.T.W. (Hanneke) Wigman (f), University Medical Centre Groningen – Psychiatry
    The researcher will study the development of psychosis (a loss of contact with reality) by describing networks of early symptoms and by examining, for each person, how these symptoms influence each other. This will help to develop personalised treatment recommendations.

Annemarie Kalis of Utrecht University received a Veni grant to investigate the role of mental content in empirical psychological research, one subproject of which is devoted to network models:

  • A role for content and mental causation in empirical psychology
    Dr A. Kalis (f), Utrecht University – Philosophy
    Many mental states have content: our convictions, desires, and intentions are all ‘about something’. Empirical psychology, however, offers little room for the notion of content. This project will develop a philosophical analysis that shows how mental content can and should play a role in empirical psychological research.