Objectivity in psychotherapy research: Do the numbers speak for themselves?

9th Munich-Sydney-Tilburg Conference “Evidence, Inference, and Risk”
31 March-2 April 2016

 


Short Abstract
Psychotherapy research is characterized by a quest for evidence-based treatment. Systematic numerical comparison by means of randomized controlled trials is held to be an objective methodology that yields evidence on the efficacy of psychological treatments. In this pursuit, numbers are taken to speak for themselves. However, I argue that the assumed procedural objectivity does not guarantee objectivity of the evidence resulting from this method of choice. I base this discussion on the analysis of a clinical case example from our own mixed-method psychotherapy research, in which the numbers appear to speak for themselves, yet the conclusion they suggest does not hold at all.

Extended abstract
Contemporary psychotherapy research is characterized by its pursuit of evidence-based treatment (EBT) [1]. Evidence is primarily defined as the result of Randomized Controlled Trials (RCTs) that show a significant decrease of psychopathological symptoms due to a course of specified therapy (see the canonical papers [2] and [3]). That is, the efficacy of treatments is established by means of rigorous randomized comparison and statistical significance testing (null hypothesis testing). In this ‘gold standard’ approach to efficacy research, two premises can be identified:

  1. numbers are assumed to be representative of levels of psychopathology (at either the group or the individual level);
  2. the difference in aggregated pre- and post-treatment numbers (i.e. symptom levels prior to and following treatment) is assumed to be representative of treatment efficacy over time.
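The two premises can be illustrated with a minimal sketch. The item names, ratings and scoring below are hypothetical assumptions for illustration only, not a real instrument or real patient data:

```python
# Premise 1: a patient's questionnaire responses (e.g. items rated 0-3,
# as on many symptom checklists) are collapsed into a single sum score
# that is taken to represent the level of psychopathology.
# Premise 2: the pre-post difference of such sum scores is taken to
# represent treatment efficacy. All values here are invented.

def sum_score(item_ratings):
    """Collapse individual item ratings into one number."""
    return sum(item_ratings)

pre_treatment = [3, 2, 3, 2, 3]   # ratings on five symptom items before therapy
post_treatment = [1, 0, 1, 1, 0]  # ratings on the same items after therapy

pre = sum_score(pre_treatment)    # 13
post = sum_score(post_treatment)  # 3
difference = pre - post           # 10 -> read as a treatment effect
```

Note that everything the patient meant by the individual ratings is already invisible at the level of the sum score, let alone at the level of the aggregated difference.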

This quantitative approach gained popularity in the 1950s [cf. 1, 4], after the publication of condemnatory reports on the actual efficacy of treatments offered by a field known for dogmatism and theoretical bias. Numbers, on the other hand, were seen as speaking for themselves, as they could function as value-free representations of observable behaviour. Therefore, in order for psychology to be a science proper, numerical methodology was advocated as a means to study human values and experiences as the object of science without relying on them in the method itself. Although nowadays no researcher would insist that such representations are entirely free of values and representational issues, numerical scrutiny is still increasingly advocated as the safest bet towards a rigorous scientific pursuit [cf. 5].

In this pursuit, the idea that numbers offer the most neutral and therefore the best approach to studying psychotherapy efficacy is evidently a non-empirical value [6]. The objectivity claim underlying this methodological pursuit thus cannot be called ‘detached objectivity’ – which Heather Douglas defines as referring to knowledge claims established on the basis of evidence, rather than on the basis of values that have no empirical support – despite claims of that nature by early proponents of the numerical method [cf. 7, 8]. Rather, the claim in this pursuit can be typified as ‘procedural objectivity’: the assumption that a non-erroneous process of knowledge generation results in objective knowledge as its product [6]. In psychotherapy research it is indeed assumed that if an RCT is conducted properly, the findings represent the efficacy of treatments objectively. That is, by following proper numerical methods for ‘objective analysis’, the evidence speaks for itself.

However, in this paper I will question whether interpreting the product in the way the RCT process requires is justified – which in turn may raise doubts about the validity of the assumptions underlying the methodology itself. I will base this discussion on a case from the psychotherapy research conducted at our department, in which patients in a clinical setting are followed routinely throughout treatment using validated quantitative as well as qualitative measures. As this patient’s numbers tell exactly the story that efficacy research anticipates, the case seems a researcher’s dream. In psychotherapy research, numbers are used to operationalize the presence of psychopathology by means of frequencies or ranges of symptoms. Symptoms are ‘observed’ using pre-fixed quantified checklists or numerically ordered questionnaires, then combined into sum scores and compared via RCTs to population means and/or probability distributions. The evidence resulting from this methodology is thus founded on numerical questionnaires, which are themselves validated statistically [9].

The problem, though, is that all numbers in this approach, from the most preliminary symptom-level scores up to intricate aggregated statistical test results, rely on the assumption that numerical questionnaires are representative of the object of science, namely human behaviour. The patient discussed in this paper repeatedly completed a questionnaire focused on a specific symptom of depression over the course of therapy. Whereas the numbers in such a questionnaire are thought to be representative of the symptom and therefore indicative of a state of wellbeing, interview data revealed that, subjectively, the patient understood the numerical questions radically differently from what was intended. In the narrative information from the treatment sessions, the patient’s wellbeing appeared highly alarming rather than improved. Consequently, this case casts doubt on the validity of the first assumption noted above – that numbers are representative of levels of psychopathology – as the narrative information tells a diametrically opposed story.

The second assumption posed in the procedural objectivity claim regards the idea that aggregated pre- and post-treatment difference numbers in RCTs are indicative of a process of cure. ‘Cure’ is operationalized as a reduction of numerical symptom levels due to a course of therapy. Consequently, efficacy is concluded if ‘the numbers’ show a significant decrease of scores over time. Graphically, a diagram showing a trend line with a negative slope is seen as good, whereas a less negative, flat or positive slope would indicate inefficacy of treatment. As the scores of our case show the ‘good’ trend line, the results would have a fairly positive influence on the aggregated test of significance, which is built up statistically from such individual pre-post difference scores. The paradox, though, is that this pre-post difference number is taken as an unproblematic starting point for calculating overall efficacy, whereas it is in fact derived from highly ambiguous individual data. Evidently, interpreting the product as the RCT methodology requires would not yield a valid conclusion of efficacy in this case.
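The ‘good’ trend line can be sketched as follows. The weekly scores are invented for illustration; the point is that a negative least-squares slope would be read as efficacy regardless of what the patient meant by the numbers:

```python
# Minimal sketch: repeated sum scores over a course of therapy, with the
# least-squares slope of the trend line computed by hand. A negative slope
# is the 'anticipated' direction in efficacy research. Scores are invented.

def slope(ys):
    """Least-squares slope of ys against measurement index 0, 1, 2, ..."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

weekly_scores = [24, 21, 19, 15, 12, 9]  # hypothetical sum scores per session
print(slope(weekly_scores))  # about -3.03: negative, the 'anticipated' direction
```

The slope (and the pre-post difference derived from the first and last scores) is entirely indifferent to how the scores came about, which is exactly the ambiguity the case exposes.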

In short, if it is assumed that the numbers are representative of psychopathology and that the method captures the process of change, the case’s numbers would seem to speak in favour of the anticipated efficacy; yet the explanation of the numbers tells a diametrically opposed story. The assumption that the numbers speak for themselves as the result of an objective process of study thus appears to be an idle dream in this case. Whereas the claim of procedural objectivity in psychotherapy research affirms a belief in the validity of ‘the evidence’, this case illustrates a lack of self-evidence of the numbers found in efficacy research. Given that the EBT movement is increasingly influential in global clinical practice, taking the numbers for granted on the basis of assumed methodological objectivity might eventually harm the primary goal of aid in psychology as a clinical field. Therefore, in this paper I will argue for the seemingly paradoxical methodological conclusion that objectivity in psychotherapy research would benefit from a more subjective approach, in which both aggregated numerical and individualized explanatory data are used to derive a clinically valid stamp of approval on the efficacy of psychotherapies.

 

 

References

[1] Wampold, B. E. (2001). The great psychotherapy debate. Models, methods and findings. New York: Routledge.

[2] Chambless, D. L., & Hollon, S. D. (1998). Defining empirically supported therapies. Journal of Consulting and Clinical Psychology, 66, 7–18.

[3] Chambless, D. L., & Ollendick, T. H. (2001). Empirically supported psychological interventions: Controversies and evidence. Annual Review of Psychology, 52, 685-716.

[4] Danziger, K. (1990). Constructing the subject. Historical origins of psychological research. Cambridge: Cambridge University Press.

[5] Stiles, W. B. (2006). Numbers can be enriching. New Ideas in Psychology, 24, 252–262.

[6] Douglas, H. E. (2009). Science, policy, and the value-free ideal. Pittsburgh: University of Pittsburgh Press.

[7] Eysenck, H. J. (1952). The effects of psychotherapy: An evaluation. Journal of Consulting Psychology, 16, 319–324.

[8] De Groot, A. (1994). Methodologie. Grondslagen van onderzoek en denken in de gedragswetenschappen [Methodology. Foundations of research and reasoning in the behavioural sciences]. Assen: Van Gorcum.

[9] Cronbach, L., & Meehl, P. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281-302.

 

 
