This is an extension of a previous analysis in which I visualized text sentiment associated with different pain treatments in a large online chronic pain forum. I wanted to use some machine learning to determine whether commenters would get better over time, based on features like how frequently they mentioned each treatment type, when they commented, and so on. Below is a feature list of my data, with a quick description of each:
chronic pain -- how often they commented in the 'chronic pain' forum
lower back pain -- same for 'lower back pain' forum
neck pain -- same for 'neck pain' forum
upper back pain -- same for 'upper back pain' forum
therapy -- how often they mentioned some sort of physical therapy, exercise, etc. in the first half of their comments
over the counter -- comment frequency of over-the-counter drugs (acetaminophen, naproxen, ibuprofen, etc.)
opioid -- comment frequency of opioid drugs
muscle relaxant -- comment frequency of muscle relaxant drugs
benzodiazepine -- comment frequency of benzodiazepine drugs
hypnotics -- comment frequency of hypnotic drugs
anticonvulsants -- comment frequency of anticonvulsant drugs
antidepressants -- comment frequency of anti-depressants
steroids -- comment frequency of steroids
invasives -- comment frequency of invasive treatment (surgery)
numComments -- total number of comments in the first half of their comment history
numDiscussions -- total number of different discussion threads they commented in during the first half of their history
1am-4am -- comment frequency between 1:00am to 4:59am (same for other time windows) ...
Sunday -- comment frequency on Sunday (same for other days) ...
Jan-Mar -- comment frequency between January 1st - March 31st (same for other month windows) ...
initSent -- average sentiment score of the first half of their comment history
For a commenter to be included in the analysis, they must have made at least 4 comments separated by at least 3 days, and they must have mentioned at least one of the treatment categories listed above. Additionally, anyone with an anonymous handle was removed, because I had trouble determining which commenters were unique. Finally, the average sentiment score of the first half of their comments had to be below -0.3, because I felt this might be a decent surrogate for a pain measurement. In other words, I was mostly interested in people who started out in pain, and I wanted to see whether they would get better (as reflected by the sentiment of their comments). The features were calculated on the first half of each commenter's history, and the task was to predict whether their average sentiment improved later in time, in the second half of their comments.
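Those inclusion filters can be sketched roughly as follows. This is my own reconstruction, not the actual preprocessing code: the column names are assumptions about the schema, and "separated by 3 days" is read here as the comment history spanning at least 3 days.

```python
import pandas as pd

# Hypothetical per-comment table -- column names are assumptions,
# not the real schema from the scraped forum data.
comments = pd.DataFrame({
    'user': ['a', 'a', 'a', 'a', 'b', 'b', 'c'],
    'date': pd.to_datetime(['2014-01-01', '2014-01-05', '2014-01-10',
                            '2014-01-20', '2014-02-01', '2014-02-02',
                            '2014-03-01']),
    'sentiment': [-0.6, -0.5, -0.4, -0.2, -0.8, -0.7, -0.1],
    'mentions_treatment': [True, True, False, True, False, False, True],
})

def eligible_users(df, min_comments=4, min_span_days=3, max_init_sent=-0.3):
    """Apply the inclusion filters: enough comments spread over enough days,
    at least one treatment mention, and a negative first-half sentiment."""
    keep = []
    for user, grp in df.sort_values('date').groupby('user'):
        if len(grp) < min_comments:
            continue
        # comment history must span at least min_span_days
        if (grp['date'].max() - grp['date'].min()).days < min_span_days:
            continue
        if not grp['mentions_treatment'].any():
            continue
        # first-half average sentiment must be below the cutoff
        first_half = grp.iloc[:len(grp) // 2]
        if first_half['sentiment'].mean() >= max_init_sent:
            continue
        keep.append(user)
    return keep
```

In this toy table only user 'a' survives all the filters: 'b' and 'c' have too few comments.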
Improvement ('improvement') was positive if their initSent was below -0.3 (so their comments were fairly negative) and the average sentiment of the second half of their comments increased by at least 0.3 (so a commenter starting right at the -0.3 cutoff would need a second-half average above 0). This is an arbitrary choice, but the reasoning is consistent with clinical measures of improvement, where improvement is often defined as a 20 to 30% change in whatever is being measured.
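That labeling rule can be written down in a few lines. This is just a sketch of the rule as described above; the function name and signature are my own:

```python
def improvement_label(first_half_sents, second_half_sents,
                      init_cutoff=-0.3, min_gain=0.3):
    """1 if the commenter started sufficiently negative and their average
    sentiment rose by at least min_gain in the second half, else 0."""
    init = sum(first_half_sents) / len(first_half_sents)
    later = sum(second_half_sents) / len(second_half_sents)
    return int(init < init_cutoff and (later - init) >= min_gain)

improvement_label([-0.5, -0.4], [0.0, 0.1])    # -0.45 -> 0.05: improved (1)
improvement_label([-0.5, -0.4], [-0.4, -0.3])  # gain of only 0.1 (0)
```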
Below is a printout of all the feature labels, and the outcome label, 'improvement':
Here's a quick look at what the data looks like:
So after whittling down my data set of over 100k comments, I was left with 440 commenters who met the inclusion criteria (at least 4 comments, at least 3 days apart). That's a serious degradation of the data set, and something I'd like to work on later so I can keep more of the data. Nonetheless, here's what it boils down to in terms of positive and negative samples:
I also wrote a quick function for balancing positive and negative samples, just in case I want to check how it influences the results, as there are about 1.5 times more positive than negative samples in the data set:
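The post's actual implementation isn't shown here, but the idea can be sketched as randomly downsampling the majority class until both classes are the same size:

```python
import numpy as np

def balanced_subsample(X, y, random_state=0):
    """Randomly downsample the majority class so both classes end up with
    the same number of samples. A sketch of the idea, not the original code."""
    rng = np.random.RandomState(random_state)
    classes, counts = np.unique(y, return_counts=True)
    n = counts.min()  # size of the minority class
    idx = np.concatenate([
        rng.choice(np.where(y == c)[0], size=n, replace=False)
        for c in classes
    ])
    rng.shuffle(idx)
    return X[idx], y[idx]

# 6 positives vs. 4 negatives, mimicking the ~1.5x imbalance in the data
X_demo = np.arange(20).reshape(10, 2)
y_demo = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0])
X_bal, y_bal = balanced_subsample(X_demo, y_demo)
```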
Here comes the machine-learning part. I'm still trying to familiarize myself with all of Python's great ML tools, so here I'm starting with something simple: logistic regression. Below I've randomly split the data set into one part that will be used to train the model and one that can be used to test it. I've also normalized the feature set:
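The split-scale-fit steps look roughly like this in scikit-learn. The data here is a synthetic stand-in for the real 440-sample feature matrix, so the numbers won't match the post's:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 440 samples, 10 features, with some real signal
rng = np.random.RandomState(0)
X = rng.randn(440, 10)
y = (X[:, 0] + 0.5 * rng.randn(440) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Fit the scaler on the training set only, then apply it to both splits
scaler = StandardScaler().fit(X_train)
clf = LogisticRegression().fit(scaler.transform(X_train), y_train)

train_acc = clf.score(scaler.transform(X_train), y_train)
test_acc = clf.score(scaler.transform(X_test), y_test)
```

Fitting the scaler on the training split alone avoids leaking test-set statistics into the model.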
And here's the result. The percent of samples correctly classified in the training set was 65.4%, and the percent correct in the test set (using the model fit on the training set) was ~61.8% (not great, but not absolutely terrible for predicting pain outcomes!):
Let's see whether this modest accuracy has to do with the unbalanced data set. Remember, there were about 1.5x more positive than negative samples. Here I've removed more data by applying the 'balanced_subsample' function written above, so that the total numbers of positive and negative samples are equal:
You can see that this reduced the data set to 92 total samples, 46 positive and 46 negative. The accuracy doesn't change much, so I'm going to keep working with the unbalanced set. A more suitable accuracy measure for unbalanced data sets is the f1-score, which is the harmonic mean of precision and recall (precision is the ratio of true positives to predicted positives, and recall is the ratio of true positives to actual positives). Below you can see that the average precision, recall, and f1-score are all around 0.6, which again isn't terrible but also not great:
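To make those definitions concrete, here's a toy example with hand-picked predictions (not the model's actual output), where 4 of 5 predicted positives are correct and 4 of 5 actual positives are found:

```python
from sklearn.metrics import classification_report, f1_score

# Toy labels: 5 actual positives, 5 predicted positives, 4 true positives
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]

# precision = TP / predicted positives = 4/5 = 0.8
# recall    = TP / actual positives    = 4/5 = 0.8
# f1 = harmonic mean of precision and recall = 0.8
print(f1_score(y_true, y_pred))
print(classification_report(y_true, y_pred))
```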
Here are the theta weights of the model, plotted as a bar chart: the greater the magnitude of a bar, the more that feature contributed to determining the outcome. Negative weights push toward no change (or a decrease) in future comment sentiment, while positive weights push toward increased sentiment.
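A sketch of how those per-feature weights can be pulled out of a fitted model. The feature names and data here are illustrative stand-ins, so the signs and magnitudes don't reflect the real results:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative feature subset and synthetic data -- not the real results
feature_names = ['initSent', 'opioid', 'numDiscussions']
rng = np.random.RandomState(1)
X = rng.randn(200, 3)
y = (X[:, 0] - X[:, 1] + 0.3 * rng.randn(200) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
weights = dict(zip(feature_names, clf.coef_[0]))

# Rank features by how strongly they push the prediction either way
for name, w in sorted(weights.items(), key=lambda kv: -abs(kv[1])):
    print(f'{name:15s} {w:+.2f}')
```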
While the predictive power of the model is not that great, the feature weights make some sense in the realm of chronic pain research. Here, the greatest predictor of future decreased sentiment for a user is the initial sentiment ('initSent'). In the same vein, in clinical pain, one of the more reliable predictors of improvement is the initial reported pain: someone who reports higher pain early on is less likely to recover from their pain completely. Here, someone with very negative initial sentiment tends to continue posting negative comments later on.
Additionally, frequency of opioid mentions ('opioid') is the strongest treatment indication that future sentiment will either not change or will get worse. In general, those who mention opioids are expressing negative sentiments that do not improve in future posts. Opioid use for chronic pain is currently one of the most pressing issues in the field due to its addictive properties and the suffering it causes patients. That negativity seems to be expressed in this forum very clearly.
Some other interesting (albeit weaker) results (to me) are that people are less likely to have higher future sentiment scores the more frequently they post during the week, rather than the weekend (are these people who may be out of work due to their pain?). Also, the number of discussions they engage in (and not so much the total number of comments) is a relatively strong predictor that their sentiment will not improve (so perhaps they have many co-morbidities with their pain?). And finally, it appears that people who posted in the lower back pain forum more frequently had a better chance of increased future sentiment, relative to the other conditions (chronic, upper back, and neck pain).
Let's see if regularization can increase the performance of the model. I like to think of regularization as putting a sort of 'low-pass filter' around the decision boundary in the feature space, to reduce overfitting. My conceptualization of this may not be entirely accurate, but it is, in a sense, a way to reduce noise in the model, so I'm going to stick with this conceptualization for right now.
Anyway, overfitting is usually evident when you get really great model performance on the training set and not-so-great performance on the test set. In my case, neither data set had stellar performance, so I don't think the model was overfit. But just to see what happens, let's go ahead and apply regularization. In scikit-learn, this can be performed with ridge classification. The alpha parameter adjusts the regularization strength, with higher values effectively increasing the width of the filter window (i.e., shrinking the weights more aggressively). Here I got moderate improvement of the model by setting it to 0.5:
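The ridge variant is a near drop-in replacement; a sketch on the same synthetic stand-in data as before (so again, the score won't match the post's):

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data, as in the logistic regression sketch
rng = np.random.RandomState(0)
X = rng.randn(440, 10)
y = (X[:, 0] + 0.5 * rng.randn(440) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# alpha sets the regularization strength: larger alpha shrinks weights harder
clf = RidgeClassifier(alpha=0.5).fit(X_train, y_train)
test_f1 = f1_score(y_test, clf.predict(X_test))
```

Note that scikit-learn's alpha isn't capped at 1; any non-negative value is allowed, and values well above 1 just regularize more heavily.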
You can see that the f1-score increased by 0.01 for both training and test sets, so not a great improvement. Here are the theta weights of the ridge model:
As expected, this looks very similar to the logistic regression model, although now the number of discussions (numDiscussions) edged out initial sentiment (initSent) to become the number one predictor of no improvement in future sentiment. Additionally, frequency of Friday / Saturday posting (as opposed to Sunday - Thursday) edged out steroids and posting frequency in the lower back pain forum to become the greatest predictor of improved sentiment. I don't want to over-interpret the results, but even though the classification is not overwhelmingly impressive, the model itself makes some sense.
I'd like to make improvements in the near future by taking into account that commenters may have misspelled some treatment words, which I hope would increase my sample size. I also removed anonymous commenters -- there were thousands of anonymous comments, and finding a way to identify the unique ones would also increase my sample size. And of course, I'd like to engineer a few more features out of the data set if possible -- in some cases I might be able to glean the sex of the commenter from their username, although this may be presumptuous. I'd also like to work on determining the model cost as a function of the number of samples, to decide whether I need to extract more features or add more data overall, as suggested in Andrew Ng's machine learning course on Coursera.