Sarcasm is a form of language that even many humans struggle to understand in real life, so it will take special skills to get your bots on social media up to speed.
In recent times we have seen a spate of sarcastic tweets from irate airline customers. Some of them go like this:
@airlinesNAME Losing my bag is a great way to keep me as a customer.
But the responses from the airline brands made it clear that it was anything but a human replying. The algorithms that power responder bots mistakenly profile such tweets as positive sentiment and end up responding the wrong way. Delta, American Airlines and Indigo are some of the brands that have faced sarcasm from their customers, and because their auto-responder bots were unable to identify it, irate tweets have snowballed into PR crises.
Sarcasm is a tricky thing. Not every human understands it every single time. Some people understand and use sarcasm far better than others, and often it goes unnoticed or is misunderstood. So expecting bots to pick it up is a bit far-fetched.
Human language can be complicated, and teaching every possible type, slang, nuance, emotion, intent and context, and the combinations thereof, to an algorithm is challenging. However, customer-facing organisations that deal with a huge number of queries every day are looking for ways to become more efficient, and auto-responder bots, chatbots and voicebots are the way forward for them. And because social media is a key channel for customers and brands to communicate with each other, we need to look at ways of getting better at addressing some of these language-related challenges.
There are off-the-shelf packages that monitor social media for the sentiments being expressed. But brands will have to look beyond some of these and build their own sentiment analysis models that leverage deep learning algorithms, to be able to identify sarcasm and ensure sarcastic messages are handled correctly more often than not.
In a regular human interaction, sarcasm is conveyed implicitly. That is, I do not tell you that I have just used sarcasm (having to do so would defeat its very purpose). But if you have spent even a short while online, you will know how difficult it is to be sarcastic in text. People take posts at face value and miss the sarcasm altogether, which is why there is a growing usage of #sarcasm now. It lets people signal to their audience that a post is meant to be sarcastic, and that could become one of the starting points for algorithms to pick up sarcasm.
The technologies that form the base for sentiment analysis models are natural language processing and deep learning. Effectively, these technologies are used to go through the text and make sense of what is being said, combined with a continuous feedback loop through which the algorithm learns what a better result looks like in different contexts.
If someone tweets, “I’m not happy with the customer service,” the sentiment is straightforward and can be handled through feature extraction and tagging. But things are not always that straightforward.
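A minimal sketch of what such feature extraction and tagging could look like, using toy word lists and a simple negation rule (the lists and rule are illustrative, not a production lexicon):

```python
import re

# Illustrative sentiment lexicon; a real model would learn these weights.
POSITIVE = {"great", "love", "excellent", "happy"}
NEGATORS = {"not", "never", "no"}

def tag_sentiment(text):
    """Tag a short text as positive/negative/neutral, flipping the
    polarity of a sentiment word when a negator directly precedes it."""
    words = re.findall(r"[a-z']+", text.lower())
    score = 0
    for i, w in enumerate(words):
        if w in POSITIVE:
            if i > 0 and words[i - 1] in NEGATORS:
                score -= 1  # "not happy" counts as negative
            else:
                score += 1
    return "negative" if score < 0 else "positive" if score > 0 else "neutral"
```

A rule like this correctly flags “I’m not happy with the customer service” as negative, but it has no notion of sarcasm, which is exactly the gap the rest of this article is about.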
Let’s look at some other ways:
Build something that is specific to Twitter
Twitter has its own unique features, such as its character limit, which forces a person to write differently than she usually does. So an algorithm that can read long-form content and detect sentiment or intent may not be best suited for Twitter; a custom Twitter-specific model is likely to be a better approach. Similarly, if a brand is active on other channels and gets feedback there, brand teams have to focus on the unique aspects of those platforms and build models to suit them.
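One concrete place this shows up is tokenization: a generic tokenizer splits @mentions, hashtags and emoticons apart, losing exactly the signals a Twitter model needs. A rough sketch of a tweet-aware tokenizer (the emoticon pattern is deliberately simplified; libraries such as NLTK's TweetTokenizer do this far more thoroughly):

```python
import re

# Tweet-aware tokenizer: keeps @mentions, #hashtags and a few common
# emoticons as single tokens instead of splitting them apart.
TOKEN_RE = re.compile(
    r"@\w+"                  # mentions, e.g. @airlinesNAME
    r"|#\w+"                 # hashtags, e.g. #sarcasm
    r"|[:;=][-o]?[)(dDpP]"   # basic emoticons like :) ;-( :D
    r"|\w+(?:'\w+)?"         # words, with an optional apostrophe
)

def tokenize_tweet(text):
    return TOKEN_RE.findall(text)
```

With this, a tweet like “@airlinesNAME losing my bag :( #sarcasm” keeps its mention, emoticon and hashtag intact as features for the model downstream.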
Go beyond detecting contrast
Typically, sarcasm is detected by a contrast of sentiment in a statement. The sentence could start with a positive sentiment, and end with a negative one. But take a look at this one that Indigo had to deal with: “Thank you for sending my baggage to Hyd and flying me to Calcutta at the same time.” There is no marked negative sentiment except for the “same time” in the context of the two other entities. Be aware that sarcasm is not always a contrast of sentiments, and the model must learn that sentences like this one are sarcastic.
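To see why contrast alone is not enough, here is a toy version of the baseline contrast heuristic: score the two halves of a sentence and flag the sentence when the halves carry opposite polarity (the word lists are illustrative):

```python
# Toy sentiment word lists for the contrast heuristic.
POS = {"thank", "great", "love", "wonderful"}
NEG = {"lost", "worst", "delayed", "cancelled"}

def half_score(words):
    """Net sentiment of a list of words: +1 per positive, -1 per negative."""
    return sum((w in POS) - (w in NEG) for w in words)

def has_sentiment_contrast(text):
    """Flag a sentence whose first and second halves have opposite polarity."""
    words = text.lower().split()
    mid = len(words) // 2
    a, b = half_score(words[:mid]), half_score(words[mid:])
    return a * b < 0  # opposite signs in the two halves
```

This catches “thank you so much for the worst flight ever”, but the Indigo tweet above sails straight through: its second half contains no negative word at all, so the contrast detector sees nothing wrong.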
Think before you delete those punctuation marks
In regular sentiment analysis models, punctuation marks, quotes and interjections are stripped out during pre-processing. However, these can prove especially useful in identifying the correct sentiment, particularly when it comes to sarcasm.
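A pre-processing step that preserves these cues might look like the sketch below, which keeps runs of punctuation (“!!!”, “...”) and quote marks as tokens of their own instead of discarding them:

```python
import re

def preprocess_keep_cues(text):
    """Lowercase and tokenize, but keep punctuation runs ('!!!', '...')
    and quote characters as their own tokens instead of stripping them."""
    return re.findall(r"\w+|[!?.]+|[\"']", text.lower())
```

Tokens like `!!!` or a scare-quoted `"wonderful"` then survive into the feature set, where a model can learn that they often accompany sarcastic posts.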
Use entity extraction smartly
Most sentiment analysis models use 2-gram extraction, which looks at two contiguous units of text at a time. But one can also consider 3-gram extraction for Twitter analyses. Working with 1-grams, 2-grams, 3-grams and 4-grams right after a positive sentiment term might also lead to a better analysis of the text.
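N-gram extraction itself is a few lines of code; the sketch below collects 1- through 3-grams from a token list (in practice one would typically let a library do this, for example scikit-learn's `CountVectorizer` with `ngram_range=(1, 3)`):

```python
def ngrams(tokens, n):
    """Return the list of contiguous n-grams from a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_features(tokens, max_n=3):
    """Collect all 1- to max_n-grams as features for a classifier."""
    feats = []
    for n in range(1, max_n + 1):
        feats.extend(ngrams(tokens, n))
    return feats
```

Longer n-grams let the model see phrases such as “thank you for losing” as a unit, which carries a very different signal than the individual words do.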
Don’t discard the hashtags
Having a #sarcasm in a tweet is a quick and easy identifier. But many systems do not consider hashtags and remove them during pre-processing, even though they could hold the clues to identifying sarcasm. In the Indigo tweet we mentioned above, the customer had used #DieIndigo, which should have been enough to detect a negative sentiment, if not sarcasm.
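Pulling hashtags out as explicit signals before any stripping happens could be as simple as this sketch (the set of “explicit sarcasm” tags is an assumption; a real system would curate it):

```python
import re

def extract_hashtags(tweet):
    """Return all hashtags in a tweet, lowercased for matching."""
    return [h.lower() for h in re.findall(r"#\w+", tweet)]

def hashtag_signals(tweet):
    """Surface hashtag-based features before pre-processing removes them."""
    tags = extract_hashtags(tweet)
    return {
        "explicit_sarcasm": "#sarcasm" in tags or "#sarcastic" in tags,
        "hashtags": tags,
    }
```

Even when no explicit #sarcasm tag is present, the raw hashtag list (here, something like #DieIndigo) can feed the sentiment model as a strong feature in its own right.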
Use dictionary lookups
Lookup dictionaries can be built with common phrases, slang terms, acronyms and so on, and the algorithm can consult them to standardise tokens during pre-processing.
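In code, such a lookup is just a normalisation table applied token by token; the entries below are a tiny illustrative sample, not a real dictionary:

```python
# Illustrative slang/acronym lookup; a production dictionary would be far larger.
NORMALISE = {
    "gr8": "great",
    "thx": "thanks",
    "u": "you",
    "imo": "in my opinion",
}

def standardise(tokens):
    """Replace known slang/acronyms with their standard forms."""
    return [NORMALISE.get(t.lower(), t.lower()) for t in tokens]
```

Standardising “gr8” to “great” before feature extraction means the sentiment lexicon only needs one entry per concept instead of one per spelling.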
Don’t forget the emojis
Emojis and smileys in a tweet reflect the sentiment being expressed, and thus hold the key to uncovering what the post really means.
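A simple way to use them is a polarity map scored against the raw tweet text, before any cleaning removes the symbols (the map below is a small illustrative sample):

```python
# Illustrative emoji/smiley polarity map (+1 positive, -1 negative).
EMOJI_POLARITY = {
    ":)": 1, ":D": 1, "😊": 1,
    ":(": -1, "😡": -1, "😭": -1,
}

def emoji_score(text):
    """Sum the polarity of every known emoji/smiley found in the text."""
    return sum(v for k, v in EMOJI_POLARITY.items() if k in text)
```

A tweet whose words read positive but whose emojis score negative is itself a contrast signal, and a useful extra feature for a sarcasm-aware model.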
Go beyond discrete
Instead of using discrete classes to label a sentiment as positive, negative, neutral or mixed, use a continuous scalar score to arrive at the sentiment. That increases the chances of identifying a sentiment that lies anywhere within a large spectrum.
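The sketch below returns a score in [-1, 1] rather than a class label, averaging toy per-word weights; libraries such as VADER expose a similar continuous “compound” score:

```python
# Illustrative per-word sentiment weights on a continuous scale.
WEIGHTS = {
    "great": 0.8, "good": 0.5, "thanks": 0.3,
    "worst": -0.9, "bad": -0.5, "delayed": -0.4,
}

def scalar_sentiment(tokens):
    """Average the weights of known sentiment words, yielding a score
    in [-1, 1] instead of a discrete positive/negative label."""
    hits = [WEIGHTS[t] for t in tokens if t in WEIGHTS]
    return sum(hits) / len(hits) if hits else 0.0
```

A tweet scoring 0.2 and one scoring 0.9 would both be labelled “positive” by a discrete classifier, but the scalar makes the difference visible and lets a downstream rule treat a weakly positive, hashtag-heavy tweet with suspicion.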
Use available data sets for learning
There are publicly available Twitter and online-review data sets that you can use to train the model. Some of them also carry labels for sarcasm, emoticons, hashtags and so on.
Use dependency trees
Dependency trees can be built and a sentiment tagged for each token. The key is to propagate the appropriate sentiment up to the parent nodes.
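The propagation step can be sketched with a hand-built toy tree; in practice the tree would come from a parser such as spaCy, and the per-token scores from a sentiment lexicon or model:

```python
# Toy dependency tree: each node carries a token, a local sentiment
# score, and its child nodes. Built by hand here for illustration.
def node(token, score=0.0, children=()):
    return {"token": token, "score": score, "children": list(children)}

def propagate(tree):
    """Sum a node's own score with its children's propagated scores,
    so sentiment tagged on tokens rolls up to the parent."""
    return tree["score"] + sum(propagate(c) for c in tree["children"])
```

Rolling scores up the tree lets the model weigh a sarcastic “thank” against the negative events it actually governs, instead of scoring tokens in isolation.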
For a more complex model, context-based weights can be applied by considering many attributes of a tweet. This could look at the author, the intended recipient, the replies, and so on. This will require supervised learning while training the model.
Many scripts are already available to help with the techniques covered here. But brand managers will need to blend them with custom code to get to a semi-supervised model that works well for their brand.
This is a domain that will see a lot of work in the coming years and will see some exciting innovations happening.
(Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of YourStory.)