Predicting changes in substance use following psychedelic experiences: natural language processing of psychedelic session narratives

This quantitative survey study (n=1141) applied machine learning to written reports of psychedelic experiences and predicted whether participants reduced their substance use following the experience, reaching roughly 65% accuracy across three independently trained models built on Natural Language Processing (NLP) descriptions of the reports.

Abstract

Background: Experiences with psychedelic drugs, such as psilocybin or lysergic acid diethylamide (LSD), are sometimes followed by changes in patterns of tobacco, opioid, and alcohol consumption. However, the specific characteristics of psychedelic experiences that lead to changes in drug consumption are unknown.

Objective: Determine whether quantitative descriptions of psychedelic experiences derived using Natural Language Processing (NLP) would allow us to predict who would quit or reduce drug use following a psychedelic experience.

Methods: We recruited 1141 individuals (247 female, 894 male) from online social media platforms who reported quitting or reducing their use of alcohol, cannabis, opioids, or stimulants following a psychedelic experience, and asked them to provide a written narrative of the psychedelic experience they credited with the change in their drug use. We used NLP to derive topic models that quantitatively described each participant’s psychedelic experience narrative. We then used the vector descriptions of each participant’s narrative as input to three different supervised machine learning algorithms to predict long-term drug reduction outcomes.

Results: We found that the topic models derived through NLP led to quantitative descriptions of participant narratives that differed across participants when grouped by the drug class quit as well as the long-term quit/reduction outcomes. Additionally, all three machine learning algorithms led to similar prediction accuracy (~65%, CI = ±0.21%) for long-term quit/reduction outcomes.

Conclusions: Using machine learning to analyze written reports of psychedelic experiences may allow for accurate prediction of quit outcomes and of which drug is quit or reduced within psychedelic therapy.

Authors: David J. Cox, Albert Garcia-Romeu & Matthew W. Johnson

Notes

The participants in the study were reducing or quitting alcohol (n = 512), cannabis (n = 272), opioids (n = 195), or stimulants (n = 162).

The models were trained on 75% of the reports and then predicted the outcomes for the remaining 25% of reports.

Imagine a computer researcher who is presented with a page of text and asked to determine whether the text expresses a positive or negative mood. The researcher could read the page and ‘process’ it to reach a conclusion one way or another. What if we change the scenario and ask the researcher to do the same for 1000 pages of text? Then we can expect the researcher to pop open a can of Red Bull and start typing away at a ‘Natural Language Processing’ (NLP) program.

NLP programs can analyze text and draw conclusions or insights from it. The technique has been used to gauge the mood of voters based on their social media posts, to track interest in stocks, and now also to analyze the content of someone’s psychedelic trip. NLP is part of machine learning or, more broadly, artificial intelligence. Using this technique, psychedelic researchers have been able to analyze more than 1100 trip reports to understand what makes people quit drugs (alcohol, cannabis, opioids, stimulants).
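To make that concrete, here is a tiny, hypothetical example of the kind of text analysis an NLP program can do, using NLTK’s off-the-shelf VADER sentiment analyzer. It simply scores the mood of a piece of text; it is not the topic-modelling approach used in the study.

```python
# Toy illustration: scoring the mood of a piece of text with NLTK's built-in
# VADER sentiment analyzer. This is not the topic-modelling approach from the
# study, just a small example of NLP-style text analysis.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of the sentiment lexicon

analyzer = SentimentIntensityAnalyzer()
report = "The experience was terrifying at first, but I came out of it feeling calm and grateful."
print(analyzer.polarity_scores(report))  # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}
```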

How did they do it?

  • The NLP algorithm analyzed the trip reports that participants wrote. Each report described the trip that prompted them to quit or reduce a drug
  • Based on the text of the report, and through trial and error with a subset of the data, the algorithm was able to predict whether someone had actually quit a drug with about 65% accuracy (see the sketch after this list)
  • This type of approach may be useful for future studies, where a written trip report could help identify the people who will quit a drug and those who might need more guidance
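A minimal sketch of that train-then-predict loop, assuming the reports have already been turned into numeric feature vectors; the random placeholder data and the k-nearest-neighbors classifier (one of the algorithms named in the paper) are illustrative only.

```python
# Illustrative 75%/25% train/test split and prediction. X stands in for the
# numeric descriptions of the trip reports, y for the reported quit outcomes;
# both are random placeholders, not the study's data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1141, 20))       # placeholder feature vectors (e.g., topic loadings)
y = rng.integers(0, 5, size=1141)     # placeholder labels for five quit/reduction outcomes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```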

Still, this type of analysis is in its infancy. Clinicians may be able to use this kind of output as another data point when treating a patient for substance addiction, but at this time it does nothing more than give a likelihood of success. Finding out who is less likely to achieve a reduction or quit a drug could be beneficial in putting more resources towards integration and support.

Summary

Experiences with psychedelic drugs may lead to changes in patterns of tobacco, opioid, and alcohol consumption.

Psychedelics have been shown to improve outcomes in studies of tobacco and alcohol use disorders, and in observational studies of religious users of psychedelic-containing plants.

Psychedelic therapies for addiction remain poorly understood, but subjective psychedelic experiences may play a critical role in lasting therapeutic benefits. Automated speech analyses using machine learning (ML) could capture subtle alterations in speech assumed to be indicative of subjective psychedelic experiences.

Natural language processing (NLP) is a computational technique that quantitatively describes human language (18). NLP has been used to predict human behavior in psychiatry (19), categorize bipolar patients from controls (19), predict suicidal behavior among hospitalized adolescents (20), and predict clinical response to psilocybin in patients with depression (21).

Methods

Participants were recruited through social media advertisements and completed an anonymous, 40-min, online survey through SurveyMonkey. They were grouped by the primary drug class they reduced or quit using (alcohol, cannabis, opioids, stimulants, and ayahuasca).

The survey assessed participants’ drug use before and after the psychedelic experience they attributed to cessation/reduction in drug use. Participants indicated whether the psychedelic experience led to complete abstinence, persistent reduction, or temporary reduction in drug use.

Topic models

We preprocessed psychedelic session narratives using NLTK for Python, then derived three experimental topic models using latent semantic analysis (LSA) with singular value decomposition (SVD). One of these models was created after removing alcohol-related words that previous research had shown to be significantly associated with craving and alcohol expectancies.
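As a rough sketch (not the authors’ exact pipeline), an LSA topic model of this kind can be built by cleaning the text with NLTK, forming a document-term matrix, and reducing it with truncated SVD. The narratives, stop-word handling, and number of topics below are all placeholders.

```python
# Illustrative LSA topic model: tokenize and clean with NLTK, build a TF-IDF
# document-term matrix, then reduce it with truncated SVD. All parameter
# choices here are placeholders, not the study's settings.
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

nltk.download("punkt")
nltk.download("stopwords")

narratives = [
    "I saw my whole life from the outside and understood what the drinking was costing me.",
    "The room dissolved and I felt connected to everything and everyone I had hurt.",
    "Hours felt like minutes, and afterwards the craving simply was not there anymore.",
]

stop_words = set(stopwords.words("english"))

def preprocess(text: str) -> str:
    # Lowercase, tokenize, and drop stop words and non-alphabetic tokens.
    tokens = word_tokenize(text.lower())
    return " ".join(t for t in tokens if t.isalpha() and t not in stop_words)

doc_term = TfidfVectorizer().fit_transform(preprocess(n) for n in narratives)

# Each model in the study had its own number of topics; two is used here only
# because this toy corpus is tiny.
lsa = TruncatedSVD(n_components=2, random_state=0)
topic_vectors = lsa.fit_transform(doc_term)  # one topic-loading vector per narrative
print(topic_vectors)
```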

Each of the four topic models created a quantitative description of a participant’s narrative as a vector of n values. The Alcohol Word Count model, for example, used 32 numbers to describe each narrative.
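The word-count control model can be sketched as a fixed-length vector of counts; the short word list below is hypothetical, since the paper’s full 32-word list is not reproduced in this summary.

```python
# Illustrative alcohol-related word-count features. The actual model counted 32
# alcohol-related words; this shortened, made-up list is only a placeholder.
ALCOHOL_WORDS = ["alcohol", "beer", "drink", "wine", "drunk"]

def alcohol_word_counts(narrative: str) -> list[int]:
    # Returns one count per listed word, giving a fixed-length feature vector.
    tokens = narrative.lower().split()
    return [tokens.count(word) for word in ALCOHOL_WORDS]

print(alcohol_word_counts("after that trip the beer in the fridge stayed there and i did not drink"))
```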

Predicting quit outcomes

We trained three different supervised machine learning algorithms on the topic-model descriptions of narratives from a randomly selected 75% of participants. The trained models were then used to predict outcomes for the remaining 25% of participants.
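A sketch of what repeatedly training and testing such classifiers might look like; k-nearest neighbors and Bernoulli naïve Bayes are the two algorithms named in this summary (the third is not specified here), and the repeated random splits are an assumption motivated by the median/SD accuracies reported in the results.

```python
# Illustrative comparison of two of the named classifiers over repeated random
# 75/25 splits, summarising accuracy as a median, as in the results tables.
# X and y are random placeholders, not the study's features or outcomes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(1)
X = rng.random(size=(1141, 6))      # placeholder topic loadings (6 topics)
y = rng.integers(0, 5, size=1141)   # placeholder quit/reduction outcome labels

for name, make_clf in [("k-nearest neighbors", KNeighborsClassifier),
                       ("Bernoulli naive Bayes", BernoulliNB)]:
    accuracies = []
    for seed in range(20):  # 20 random splits; the paper's exact number is not given here
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=seed)
        accuracies.append(make_clf().fit(X_tr, y_tr).score(X_te, y_te))
    print(f"{name}: median accuracy {np.median(accuracies):.2f} (SD {np.std(accuracies):.2f})")
```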

Data analysis

We used Python to conduct 10 planned sets of ANOVAs spanning three questions. We compared the model metrics of all four topic models for one supervised ML algorithm.
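For illustration, one of these ANOVAs could look like the following sketch, which asks whether a single topic’s loadings differ across the four drug-class groups; the group means and spreads are invented, and scipy’s f_oneway is used as a generic stand-in for the authors’ statistics code.

```python
# Illustrative one-way ANOVA: does a given topic's loading differ between the
# groups who quit/reduced different drug classes? Group sizes match the study
# (alcohol 512, cannabis 272, opioids 195, stimulants 162); the values are random.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
alcohol    = rng.normal(0.10, 0.05, size=512)
cannabis   = rng.normal(0.08, 0.05, size=272)
opioids    = rng.normal(0.09, 0.05, size=195)
stimulants = rng.normal(0.09, 0.05, size=162)

f_stat, p_value = f_oneway(alcohol, cannabis, opioids, stimulants)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```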

Results

Figure 1 shows the coherence scores for the LSA-All, LSA-Scrubbed, and LSA-Alcohol topic models.

Table 2 shows the top 10 words by weight for each topic from the topic models. The first word listed had the greatest weight, with the remaining words listed in order of decreasing weight.

Differentiating drug quit or reduced

Table 3 shows that the LSA-All topic model differentiated which drug participants reduced/quit using. There were statistically significant differences for all six topics, and three of the pairwise comparisons had only one topic resulting in a significant difference.

The LSA-Scrubbed topic model differentiated 5 of 6 pairwise comparisons between participants who reduced/quit using different drug classes following a psychedelic experience; the exception was the comparison between the alcohol and stimulant groups.

We observed statistically significant differences in the frequency of three alcohol-related words (‘alcohol’, ‘beer’, and ‘drink’) and in the sum of alcohol-related words between participants who reduced/quit alcohol use following a psychedelic experience and participants who reduced/quit a different drug class.

Differentiating quit outcome groups

The LSA-All topic model differentiated 9 out of 15 possible pairwise comparisons between reduction/quit outcome groups, including ‘stopped completely’, ‘reduced greatly’, ‘reduced somewhat’, ‘stopped but returned to pre-psychedelic rates’, and ‘other’.

We observed statistically significant differences in the LSA-Scrubbed model between reduction/quit outcome groups, with Topic 5 showing the largest effect size (η² = 0.03). Follow-up multiple comparisons showed the LSA-Scrubbed topics differentiated 10 of 15 pairwise comparisons.

Table 9 shows that the frequency of alcohol-related words used differentiated the reduction/quit outcome groups. The ‘reduced greatly’ group used more alcohol-related words than the ‘stopped completely’ and ‘reduced somewhat’ groups.

Predicting outcomes with ML

The k-nearest neighbor algorithm led to statistically significant differences in prediction accuracies between the four topic models. The LSA-Alcohol topic model led to the highest prediction accuracy, with a median prediction accuracy of 47%.

The naïve Bayes Bernoulli algorithm had statistically significant differences in prediction accuracies between topic models, except between the LSA-Scrubbed and LSA-All models. The LSA-Alcohol topic model had the highest median prediction accuracy.

For the third supervised algorithm, we observed statistically significant differences in prediction accuracies between the four topic models, except between the LSA-Scrubbed and Alcohol-Related-Word-Count topic models. The LSA-Alcohol topic model again had the highest median prediction accuracy.

Discussion

We used NLP to predict drug use outcomes in 1141 individuals who reported decreasing rates of drug consumption following a psychedelic experience. The LSA-All and LSA-Scrubbed topic models performed best in differentiating which drug class participants reduced/quit using.

A larger corpus across multiple drug classes resulted in better quantitative descriptions of psychedelic experiences relative to reduction/quit outcomes. Additionally, topic models were better able to predict long-term participant quit outcomes than the control model.

LSA-Alcohol resulted in the highest prediction accuracies, but LSA-All and LSA-Scrubbed were only slightly worse than LSA-Alcohol for predicting quit outcomes for individuals who consumed multiple drugs.

All three algorithms led to similar prediction accuracies, but naïve Bayes Bernoulli led to the highest prediction accuracy using LSA-Alcohol. However, the least complex and least resource-intensive algorithm (i.e., k-nearest neighbors) led to similar prediction accuracies.

Limitations of this study include that the accuracy of the narratives may have been affected by the delay between the psychedelic experience and its recall, and that participants were limited to those who reduced/quit using a drug following a psychedelic experience and were motivated to talk about it.

Conclusion

Past research indicates that some psychedelic experiences are followed by changing drug consumption, including alcohol, opioids, and tobacco. Participants in the present study also reported reducing consumption of cannabis and stimulants.

This study, which used NLP and supervised ML to predict quit outcomes following a psychedelic experience, extends the literature on psychedelic treatment for substance use disorders.

Authors

Authors associated with this publication with profiles on Blossom

Albert Garcia-Romeu
Albert Garcia-Romeu is one of the principal researchers in the renaissance of psychedelics studies. He conducts his research at Johns Hopkins and focuses on psilocybin and how it can help with treating addiction.

Matthew Johnson
Matthew Johnson is an Associate Professor of Psychiatry and Behavioral Sciences at Johns Hopkins University. His research is concerned with addiction medicine, drug abuse, and drug dependence.