Objectivity in psychotherapy research: Do the numbers speak for themselves?

9th Munich-Sydney-Tilburg Conference “Evidence, Inference, and Risk”
31 March-2 April 2016

 

Objectivity in psychotherapy research: Do the numbers speak for themselves?

Short Abstract
Psychotherapy research is characterized by a quest for evidence-based treatment. Systematic numerical comparison by means of randomized controlled trials is held to be an objective methodology yielding evidence on the efficacy of psychological treatments. In this pursuit, numbers are taken to speak for themselves. However, I argue that the assumed procedural objectivity does not yield objectivity of the evidence resulting from the method of choice. I base this discussion on the analysis of a clinical case example from our own mixed method psychotherapy research, in which the numbers could seem to speak for themselves, yet the conclusion they suggest does not hold at all.

Extended abstract
Contemporary psychotherapy research is characterized by its pursuit of evidence-based treatment (EBT) [1]. Evidence is primarily defined as the result of Randomized Controlled Trials (RCTs) that show a significant decrease of psychopathological symptoms due to a course of specified therapy (see the canonical papers [2] and [3]). That is, the efficacy of treatments is established by means of rigorous randomized comparison and statistical significance or null-hypothesis testing. This ‘gold standard’ approach to efficacy research rests on two premises:

  1. numbers are assumed to be representative of levels of psychopathology (at either the group or the individual level);
  2. the difference in aggregated pre- and post-treatment numbers (i.e. symptom levels prior to and following treatment) is assumed to be representative of treatment efficacy over time (as formalized in the sketch below).
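
To fix ideas, the two premises can be written out schematically. This is my reconstruction in standard notation, not notation taken from the cited literature:

```latex
% Premise 1: the sum over k item scores x_{ij}(t) is taken to represent
% patient i's level of psychopathology at time t.
S_i(t) = \sum_{j=1}^{k} x_{ij}(t)

% Premise 2: the individual pre-post difference score
d_i = S_i(\text{pre}) - S_i(\text{post})

% is aggregated over the n patients in a trial arm, and efficacy is
% claimed when the mean difference is statistically significant:
\bar{d} = \frac{1}{n} \sum_{i=1}^{n} d_i, \qquad
\text{efficacy claimed if } H_0 : \mu_d = 0 \text{ is rejected}
```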

This quantitative approach has gained popularity since the 1950s [cf. 1, 4], after the publication of condemnatory reports on the actual efficacy of treatments offered by a field known for dogmatism and theoretical bias. Numbers, on the other hand, were seen as speaking for themselves, as they could function as value-free representations of observable behaviour. Therefore, in order for psychology to be a science proper, numerical methodology was advocated as a means to study human values and experiences as the object of science without relying on those values in the method itself. Although nowadays no researcher would insist that such representations are entirely free of values and representational issues, numerical scrutiny is still increasingly advocated as the safest route towards a rigorous scientific pursuit [cf. 5].

In this pursuit, the idea that numbers offer the most neutral and therefore the best approach to studying psychotherapy efficacy is evidently a non-empirical value [6]. The objectivity claim underlying this methodological pursuit thus cannot be called ‘detached objectivity’ – which Heather Douglas defines as referring to knowledge claims established on the basis of evidence rather than on the basis of values that have no empirical support – despite claims of that nature by early proponents of the numerical method [cf. 7, 8]. Rather, the claim in this pursuit can be typified as ‘procedural objectivity’: the assumption that a non-erroneous process of knowledge generation results in objective knowledge as its product [6]. In psychotherapy research it is indeed assumed that if an RCT is conducted properly, the findings represent the efficacy of treatments objectively. That is, by following proper numerical methods for ‘objective analysis’, the evidence speaks for itself.

However, in this paper I will question whether interpreting the product in the way required by the RCT process is justified – which may in turn raise doubts about the validity of the assumption underlying the methodology itself. I will base this discussion on a case from the psychotherapy research conducted at our department, in which patients in a clinical setting are routinely followed throughout treatment using validated quantitative as well as qualitative measures. As this patient’s numbers tell exactly the story anticipated in efficacy research, the case seems a researcher’s dream. In psychotherapy research, numbers are used to operationalize the presence of psychopathology by means of frequencies or ranges of symptoms. Symptoms are ‘observed’ using pre-fixed quantified checklists or numerically ordered questionnaires, then combined into sum scores and compared via RCTs to population means and/or probability distributions. The evidence resulting from this methodology is thus founded on numerical questionnaires, which are themselves validated statistically [9].
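
As a minimal illustration of this operationalization, consider the following sketch in Python. The seven items, the 0–3 response format and the severity labels are hypothetical stand-ins, not taken from any actual instrument:

```python
# Minimal sketch: how questionnaire responses become a 'symptom level'.
# All items and numbers are hypothetical, not real instrument data.

def sum_score(item_scores: list[int]) -> int:
    """Combine per-item responses (each scored 0-3) into one symptom sum score."""
    assert all(0 <= s <= 3 for s in item_scores), "response out of range"
    return sum(item_scores)

# One patient's responses to a hypothetical 7-item depression questionnaire:
pre_treatment = [3, 2, 3, 2, 3, 2, 3]    # 'observed' symptom endorsements
post_treatment = [1, 1, 0, 1, 1, 0, 1]

print(sum_score(pre_treatment))    # 18 -> read as 'severe'
print(sum_score(post_treatment))   # 5  -> read as 'recovered'
```

Note that the sum score is indifferent to how the patient actually understood the items; that gap is exactly what the case discussed below exposes.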

What is problematic, though, is that all numbers in this approach, from the most preliminary symptom-level scores up to intricate aggregated statistical test results, rely on the assumption that numerical questionnaires are representative of the object of science, namely human behaviour. The patient in the case discussed in this paper repeatedly completed a questionnaire focused on a specific symptom of depression over a course of therapy. Whereas the numbers in such a questionnaire are thought to be representative of the symptom and therefore indicative of a state of wellbeing, interview data revealed that, subjectively, the patient understood the numerical questions radically differently than intended. In the narrative information from the treatment sessions, the patient’s wellbeing appeared highly alarming rather than improved. Consequently, this case casts doubt on the validity of the first assumption noted above, namely that numbers are representative of levels of psychopathology, as the narrative information tells a diametrically opposed story.

The second assumption posed in the procedural objectivity claim concerns the idea that aggregated pre-post difference numbers in RCTs are indicative of a process of cure. ‘Cure’ is operationalized as a reduction of numerical symptom levels due to a course of therapy. Consequently, efficacy is concluded if ‘the numbers’ show a significant decrease of scores over time. Graphically, a trend line with a negative slope is read as good, whereas a less negative, flat or positive slope would indicate inefficacy of treatment. As the scores of our case show the ‘good’ trend line, the results would have a decidedly positive influence on the aggregated test of significance, which is built up in a statistical sequence from such individual pre-post difference scores. Paradoxically, though, this pre-post difference number is taken as the unproblematic starting point for calculating overall efficacy, whereas it is in fact derived from highly ambiguous individual data. Evidently, the interpretation of the product as required by the RCT methodology would not yield a valid conclusion of efficacy in this case.
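
The statistical sequence from individual pre-post differences up to the aggregated significance test can be sketched as follows. The group sizes and scores are invented, and a standard two-sample t-test stands in for whatever analysis a given trial prespecifies:

```python
# Sketch of the aggregation step: individual pre-post difference scores
# feed a group-level significance test. All data are invented.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(seed=0)
treatment_diffs = rng.normal(loc=8.0, scale=4.0, size=30)  # pre minus post
control_diffs = rng.normal(loc=2.0, scale=4.0, size=30)

# Our ambiguous case contributes a large 'good' difference (18 - 5 = 13
# in the earlier sketch), nudging the aggregate towards significance even
# though the narrative data contradict the improvement it suggests.
treatment_diffs = np.append(treatment_diffs, 13.0)

result = ttest_ind(treatment_diffs, control_diffs)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")  # small p -> 'efficacy'
```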

In short, if it is assumed that the numbers are representative of psychopathology and the method is representative of the process of change, the case’s numbers would seem to speak in favour of the anticipated efficacy, yet the explanation of those numbers tells a diametrically opposed story. The assumption that the numbers speak for themselves as the result of an objective process of study therefore appears to be an idle dream in this case. Whereas the claim of procedural objectivity in psychotherapy research affirms a belief in the validity of ‘the evidence’, this case illustrates the lack of self-evidence of the numbers found in efficacy research. Given that the EBT movement is increasingly influential in clinical practice worldwide, taking the numbers for granted on the strength of assumed methodological objectivity might eventually harm the primary goal of psychology as a clinical field: providing help. Therefore, in this paper I will argue for the seemingly paradoxical methodological conclusion that objectivity in psychotherapy research would benefit from a more subjective approach, in which both aggregated numerical and individualized explanatory data are used to arrive at a clinically valid stamp of approval on the efficacy of psychotherapies.

 

 

References

[1] Wampold, B. E. (2001). The great psychotherapy debate. Models, methods and findings. New York: Routledge.

[2] Chambless, D. L., & Hollon, S. D. (1998). Defining empirically supported therapies. Journal of Consulting and Clinical Psychology, 66, 7–18.

[3] Chambless, D. L., & Ollendick, T. H. (2001). Empirically supported psychological interventions: Controversies and evidence. Annual Review of Psychology, 52, 685–716.

[4] Danziger, K. (1990). Constructing the subject. Historical origins of psychological research. Cambridge: Cambridge University Press.

[5] Stiles, W. B. (2006). Numbers can be enriching. New Ideas in Psychology, 24, 252–262.

[6] Douglas, H. E. (2009). Science, policy, and the value-free ideal. Pittsburgh: University of Pittsburgh Press.

[7] Eysenck, H. J. (1952). The effects of psychotherapy: An evaluation. Journal of Consulting Psychology, 16, 319–324.

[8] De Groot, A. (1994). Methodologie. Grondslagen van onderzoek en denken in de gedragswetenschappen. Assen: Van Gorcum.

[9] Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281–302.

 

 

All have won what? On the epistemic value of RCT-based evidence in Evidence Based Treatments

Society for Psychotherapy Research (SPR)
24-26 September 2015, Klagenfurt, Austria

All have won what? On the epistemic value of RCT-based evidence in Evidence Based Treatments

Abstract
Aim “All are effective, and all must have prizes” – so goes the dodo bird verdict on the statistical indifference between ‘evidence-based’ types of psychotherapy. Evidence is understood as the result of randomized controlled trial (RCT) research. This ‘gold standard’ design requires samples that are homogeneous with regard to symptoms, a requirement that follows from the assumptions of central tendency statistics. Symptom-specific measures are used to select patients scoring above a cut-off, resulting in samples characterized by tight and simple symptom patterns. Whereas the external validity of this procedure has been heavily critiqued, I study the internal validity of a priori methodological assumptions in RCT efficacy research. In this paper, I address the clinical plausibility of sample selection by a quantified symptom measure for depression, the BDI-II, which has been found to be a reliable screening measure for Major Depressive Disorder.

Method In a pilot study on our mixed method psychotherapy data from depressed adults, we used the BDI-II to select patients eligible to participate in a strict RCT design. For both the eligible and the non-eligible sample, we scrutinized individual symptom patterns.
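
As a rough illustration of this screening step, the sketch below filters a toy sample on a BDI-II total score. The cut-off of 20 and the patient records are hypothetical placeholders, not data from our pilot study:

```python
# Sketch of quantified eligibility screening: include patients whose BDI-II
# total reaches a cut-off, then inspect the 'eligible' sample for comorbidity.
# The cut-off and all records are hypothetical.
CUTOFF = 20

patients = [
    {"id": 1, "bdi_total": 31, "comorbid": True},
    {"id": 2, "bdi_total": 12, "comorbid": False},  # excluded: below cut-off
    {"id": 3, "bdi_total": 24, "comorbid": True},
    {"id": 4, "bdi_total": 27, "comorbid": False},
    {"id": 5, "bdi_total": 17, "comorbid": True},   # excluded: below cut-off
]

eligible = [p for p in patients if p["bdi_total"] >= CUTOFF]
comorbid_share = sum(p["comorbid"] for p in eligible) / len(eligible)

print(f"included: {len(eligible)}/{len(patients)}")                   # 3/5
print(f"comorbidity within 'eligible' sample: {comorbid_share:.0%}")  # 67%
```

Even a sample that passes the numerical gate can thus remain heterogeneous on clinically relevant dimensions.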

Results Analysis showed that 60% of the depressed patients would be included and 40% excluded. Moreover, 75% of the ‘eligible’ sample still showed comorbidity, which would have been reason to exclude them from an RCT.

Discussion The extraordinarily large exclusion percentage indeed casts doubt on the external and ecological validity of quantified eligibility screening in RCT efficacy research. More striking, however, is that a supposedly ‘eligible’ sample still violates the assumption of homogeneity. In this paper, I propose this invalidity of a priori methodological assumptions as an explanation for the dodo bird effect. I discuss the implications of these preliminary empirical results for the understanding of ‘evidence’ from RCTs and for the epistemic value of Evidence Based Treatments.

 

 

Objectivity in psychotherapy research: Do the numbers speak for themselves?

Society for Psychotherapy Research (SPR)
24-26 September 2015, Klagenfurt, Austria


The Treatment Manual as figurehead of Evidence-Based Treatment: how evident is manualized treatment in practice?

Studiedag Vlaamse Vereniging voor Klinisch Psychologen
20 May 2018, Leuven, Belgium

The Treatment Manual as figurehead of Evidence-Based Treatment: how evident is manualized treatment in practice?

 

Abstract

Background
The American Psychological Association recognizes over 320 evidence-based treatments (EBTs), on the basis of treatment manuals that have been found scientifically effective. Both clinically and at the policy level, treatment protocols are increasingly embraced as a guarantee of quality care. In clinical practice, however, criticism is voiced about their applicability in individual therapy. In this paper we examine the empirical evidence for the added value of treatment based on a treatment manual.

Method
In a systematic literature review, we examined four hypotheses that would have to be supported for manualized treatment to count as a factor in treatment effectiveness: manualized treatment (1) is more effective than treatment-as-usual and (2) than syndrome-specific non-manualized treatment, (3) has a larger effect size, and (4) is mediated by manual adherence.

Results
Hardly any empirical research was found in which manualized treatment was compared with non-manualized treatment. Two reviews dating from before the rise in popularity of treatment manuals showed no significant influence on effect size. Recent studies in which our hypotheses were examined indirectly yielded no unequivocal support for any of the hypotheses.

Discussion
Our literature review revealed surprisingly little attention to the most basic assumption of EBT. Treatment manuals were developed from the logic of systematic effectiveness research, in which a standardized treatment process is required for the sake of comparability. Consequently, evidence was predominantly found for standardized treatments, from which the premise followed that standardized treatment is evidence-based and therefore superior to non-standardized treatment. In this paper we consider the implications of our findings for clinical practice. We invite the audience to actively join the conversation, so that the lack of scientific evidence can serve as an opening for a dialogue between scientific research and clinical work.

 

 

 

Validity in times of measurement

BACP Research Conference 2016, Methodological Innovation Paper
18-21 May 2016, Brighton, United Kingdom

Validity in times of measurement. On the epistemic validity of test validity in psychotherapy research

 

Abstract

Background and introduction
In psychotherapeutic research on Evidence-based Treatments (EBTs), treatment efficacy is operationalized as the numerical mean of individual treatment successes. Efficacy numbers are derived in randomized controlled trials [i] through symptom measurement with validated symptom measures [ii]. ‘Validity’ here refers to the test validity of such measures – i.e. to the adequacy of a measure as a means to satisfy its proposed end [iii] – which is taken as vital for the validity of the research procedure. However, this paper argues that test validity is in principle insufficient as a means to the overall epistemic validity of psychotherapy research.

Nature of the methodological innovation/critique being proposed
In psychological methodology, validity is strictly bound to instruments, such as the Beck Depression Inventory (BDI) [iv], which has the detection of depression symptoms as its end. In application, however, the measure becomes a means towards an epistemic end, which may differ from the end of the measure itself. Consequently, the test validity of a symptom measure is only part of the epistemic validity of the operationalization of treatment efficacy per se. In this paper, epistemic validity is discussed at both the conceptual and the operational level: the conceptual level is meaningless in psychotherapy research without its operationalization, yet the operationalization raises multiple validity questions that reach beyond test validity – which implies a paradox in the validity of applied measures in psychotherapy research. This paradox is discussed via an empirical case study from our mixed method psychotherapy study [v]. Two interpretations of its idiosyncratic treatment success are sketched, in which the application of measures implies different conclusions on test validity given a difference in the operationalization of ‘treatment success’ – which substantiates the need for a concept of validity that goes beyond test validity in psychotherapeutic epistemology.
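
To make the operationalization point concrete, the sketch below applies two common definitions of individual ‘treatment success’ to the same hypothetical scores. The thresholds are illustrative, and the reliable-change criterion is only roughly rendered, not computed from actual reliability data:

```python
# Two operationalizations of 'treatment success' applied to the same
# (hypothetical) pre- and post-treatment questionnaire totals.
pre, post = 21, 14

# (a) Clinical significance: post-treatment score falls below a cut-off.
CUTOFF = 15
success_by_cutoff = post < CUTOFF                            # True

# (b) Reliable change: improvement must exceed a minimal reliable
#     difference (in the spirit of Jacobson & Truax's criterion).
MINIMAL_RELIABLE_CHANGE = 9
success_by_change = (pre - post) >= MINIMAL_RELIABLE_CHANGE  # False

print(success_by_cutoff, success_by_change)  # True False: same numbers,
                                             # opposite verdicts on 'success'
```

The same test scores thus license opposite conclusions once the epistemic end shifts, which is the paradox at issue.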

Conclusion and relevance to counselling and psychotherapy research practice
In this paper, the clinical relevance of ‘epistemic validity’ in psychotherapeutic methodology is highlighted by discussing it within idiosyncratic data. As individual data form the basis of the aggregated data used to derive EBTs, which in turn are translated back to individuals in clinical practice, it is vital to understand the epistemic validity of ‘the evidence’ in Evidence Based Treatments.

 


How questionnaires shape answers

Society for Psychotherapy Research Conference, World Chapter
27-30 June 2018, Amsterdam, The Netherlands

 

How questionnaires shape answers. On validity and performativity of ‘the data’ in psychotherapeutic research

 

Abstract 


Background and introduction
In psychotherapeutic research, quantified symptom assessment by means of validated self-report questionnaires is the default methodological practice for studying pre-post symptom change over a course of treatment. A multitude of problems has been voiced in the literature regarding the validity of numerical assessment. Nonetheless, quantified symptom measurement is often regarded as ‘the best we have’. In this paper, we argue that these issues cannot simply be taken for granted in gathering ‘the data’: questionnaires are not only hard (if not impossible) to interpret straightforwardly and universally, they also actively change the object of interest, which has severe implications for the epistemic value of psychotherapy research.

Method
We structure our argument around the data of two patients who participated in the Ghent Psychotherapy Study (GPS; Meganck et al., 2017). We discuss their scores on a battery of symptom and psychological wellbeing measures that were administered repeatedly before, during and after treatment. First, we discuss the interpretation of their quantitative scores given the idiosyncratic context. Second, we use interview data to zoom in on the process of scoring and its meaning for these particular participants.

Results
Both cases were seriously affected by having to score the questionnaires, which had a salient impact on their primary symptoms – positively for one patient, negatively for the other – and therefore on the pre-post difference used to study the treatment effect. By means of these case discussions, we reiterate the nature and complexity of numerical representation, but we also show how the act of administration creates a surplus that would not have occurred in treatment outside the context of psychotherapy research. We call this the performativity of data collection.

Discussion
This performativity of questionnaire administration has severe consequences for the ‘object of interest’, which is change due to psychotherapeutic treatment. As the administration of questionnaires affects the pre-post difference scores beyond the treatment effect itself, performativity in principle prohibits the straightforward interpretation of numbers that we are used to in psychotherapy research. This is not necessarily bad, as it provides a wealth of clinically relevant information on patient behaviour and change factors, yet it is crucial to acknowledge that a simple, straightforward interpretation is illusory. We argue that it is necessary to start asking our research questions differently, so that they actually fit ‘the data’ gathered in psychotherapy research.

What is it like to be a dependent variable?

CiNaPS Conference: “Causality in the Neuro- and Psychological Sciences”
29-30 October 2018, Antwerp, Belgium.

What is it like to be a dependent variable?

Abstract
In psychotherapeutic research, the ‘randomized controlled trial’ is held as the gold standard for reaching causal statements on treatment efficacy. [i] In this interventionist design, the therapeutic intervention is considered the independent variable and is contrasted either with a no-treatment control condition or with a parallel alternative-treatment condition. [ii] Iff all potentially influencing factors are kept constant, the difference in interventions is thought to cause a measured or observed difference in the dependent variable, namely the number of symptoms shown by patients, the receivers of treatment. Within this design, it is thus assumed that the human beings who participate in the study are the carriers of specified variables. [iii] The critical literature focuses primarily on the validity of instruments, the demarcation and measurement of variables, epistemic issues in randomization and the circumstances of treatment, et cetera, yet little attention is given to the most vital presupposition of this design: the participant. As it is his or her collected ‘data’ that allow for an aggregated comparison of outcomes, this paper focuses on the process of data retrieval from patients who follow treatment in a research context. What is it like for patient-participants to be considered a ‘dependent variable’, and how may that affect the object under investigation, the therapeutic intervention?

In this paper, we discuss a patient who voluntarily participated in a randomized controlled efficacy study of psychotherapy for major depression. [iv] The patient, a 47-year-old male, was randomly assigned to a psychotherapist for a 20-session weekly depression treatment. Before, during and after treatment, he was interviewed by ‘his own’ researcher. Narratives from therapy and research were analyzed with an interpretative phenomenological approach [v] to understand how the patient experienced the ‘role’ or ‘function’ of being a participant in a scientific study, and how this may have impacted his therapeutic process.

The selected case is a critical case: not necessarily representative of the larger population, but highly informative about the attribution of causality based on the strictly defined methodological procedure in psychotherapy research. [vi] By discussing the idiosyncrasy of his experience and the perceived impact of the research procedure on the data gathered during the therapeutic intervention, we question whether it is justified to aggregate data from this idiosyncratic case into group data – a vital element of the concept of causality within the gold standard method. The question is not how causality works for our individual patient, as it is assumed that such individual processes level out via randomization at the group level. Rather, the question is whether the causal attribution within the rationale of an interventionist design remains valid when the object of study appears to be altered by the act of studying it.

[i] Wampold, B. E. (2001). The great psychotherapy debate. Models, methods and findings. New York: Routledge; cf. Cartwright, N. (2010). What are randomised controlled trials good for? Philosophical Studies, 147, 59–70.

[ii] Kendler, K. S., & Campbell, J. (2009). Interventionist causal models in psychiatry: repositioning the mind–body problem. Psychological Medicine, 39, 881–887.

[iii] Cf. Woodward, J. (2011). Data and phenomena: a restatement and defense. Synthese, 182, 165–179.

[iv] Meganck, R., Desmet, M., Bockting, C., Inslegers, R., Truijens, F., De Smet, M., et al. (2017). The Ghent Psychotherapy Study (GPS) on the differential efficacy of supportive-expressive and cognitive behavioral interventions in dependent and self-critical depressive patients: study protocol for a randomized controlled trial. Trials, 18, 126. doi: 10.1186/s13063-017-1867-x.

[v] Smith, J. A., & Eatough, V. (2007). Interpretative phenomenological analysis. In E. Lyons & A. Coyle (Eds.), Analysing qualitative data in psychology. London: Sage.

[vi] Cf. Truijens, F. L. (2016). Do the numbers speak for themselves? A critical analysis of procedural objectivity in psychotherapeutic efficacy research. Synthese, 194, 4721–4740.