In last week’s blog post, we outlined what sentiment analysis is and why it’s so important to understand your customers. We explained how the majority of modern-day sentiment tracking systems rely on NLP (Natural Language Processing) algorithms to detect the emotions behind language, why this can lead to unreliable results, and how our innovative Human Insight service can help eradicate inaccuracies from your social data. Today, we’re diving further into the limitations of AI-based sentiment monitoring and sharing the huge advantages of using a service run by real people, not bots.
Sarcasm, Irony & Exaggeration
Sarcastic language is one of the biggest problems faced by AI sentiment trackers. In sarcastic comments and messages, people express negativity by using positive words, which presents a challenge for computer algorithms trained to treat those words and phrases as positive. Think of how many times you’ve probably exclaimed “Oh, great” when it begins to rain just as you’re about to leave the house, “That’s just brilliant” when your laptop battery dies halfway through an important project, or “Mmm, delicious” when removing last week’s expired leftovers from the fridge!
Numerical sarcasm is a particularly common type of sarcasm on social media, which, as this report explains, is another prominent cause of AI errors in sentiment detection. The title of the report, “Having 2 hours to write a paper is fun!”, is a key example of numerical sarcasm, along with phrases such as “I love how my phone takes 5 hours to charge!” and “Thanks for finally responding to me, it only took you 2 weeks!”.
Unlike bots, humans can use contextual clues and other conversational elements to work out when someone is being sarcastic online, meaning they can categorise comments more effectively than automated systems. Our Human Insight experts are highly skilled in analysing different tones of voice and identifying cases where exaggeration is being used to convey the opposite emotion, making your sentiment reports far more accurate.
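For technically-minded readers, here is a minimal sketch of why this happens, using the open-source VADER analyser from NLTK (assuming NLTK and its vader_lexicon data are installed). A purely lexicon-based scorer like this tends to read sarcastic phrases as positive because it only sees the surface words; the exact scores will vary, and this is an illustration of the general failure mode, not a description of any particular commercial tracker.

```python
# Minimal sketch: lexicon-based scoring misreads sarcasm.
# Assumes NLTK is installed and the VADER lexicon has been downloaded:
#   pip install nltk
#   python -c "import nltk; nltk.download('vader_lexicon')"
from nltk.sentiment import SentimentIntensityAnalyzer

analyser = SentimentIntensityAnalyzer()

sarcastic_comments = [
    "Oh, great. It's raining just as I'm about to leave.",
    "I love how my phone takes 5 hours to charge!",
    "Thanks for finally responding to me, it only took you 2 weeks!",
]

for comment in sarcastic_comments:
    scores = analyser.polarity_scores(comment)
    # 'compound' runs from -1 (most negative) to +1 (most positive).
    # Words like "great", "love" and "thanks" pull these scores towards
    # positive, even though a human reader recognises the frustration.
    print(f"{scores['compound']:+.2f}  {comment}")
```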
Slang Terms and Informal Language
Another big drawback of automated sentiment monitoring is its inability to understand slang terms and informal language. Think of adjectives like “sick”, “snatched” and “fire”, all of which would most likely be labelled negative by a bot, but are commonly used by younger generations on social media to convey positive emotions. Similarly, acronyms and other modern Internet terms like “the GOAT” (greatest of all time), “no cap” (I’m not kidding/no joke) and “SMH” (shaking my head, disapproving) can present problems for bots. Even a simple word like “mood” (I agree, I’m feeling this), commented under a brand’s social media post, can mean that the user feels positively about what’s been said - but an AI system is unlikely to realise this and may leave the comment uncategorised altogether.
What’s more, social media slang is constantly evolving, and it can be very difficult for algorithms to keep up with the rapid changes in online vocabulary. Digital trend cycles are extremely short, sometimes lasting just a few weeks, and the meanings of words can quickly flip from positive to negative and vice versa, causing confusion for AI systems trained on fixed, binary positive/negative labels.
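As a rough illustration of the mechanism, the sketch below uses an entirely made-up mini lexicon with fixed word weights. Real systems are more sophisticated, but the failure mode is the same: slang either carries the wrong sign or is simply invisible to the scorer.

```python
# A simplified, hypothetical lexicon scorer illustrating the slang problem.
# The lexicon and weights below are invented for illustration only.
FIXED_LEXICON = {
    "sick": -2.0,      # labelled negative, as in "I feel sick"
    "fire": -1.0,      # labelled negative, as in "the building is on fire"
    "great": +2.0,
    "terrible": -2.5,
}

def score(comment: str) -> float:
    """Sum the weights of known words; unknown slang contributes nothing."""
    words = comment.lower().replace("!", "").split()
    return sum(FIXED_LEXICON.get(word, 0.0) for word in words)

# Both comments are enthusiastic praise, but a static lexicon reads them
# as negative ("sick", "fire") or ignores the slang entirely ("no cap").
print(score("this new menu is sick"))       # -2.0: wrongly negative
print(score("that burger is fire no cap"))  # -1.0: wrongly negative
```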
Regional Dialects and Accent Phonetics
The UK is home to a vast range of different dialects and accents which greatly impact the ways in which people communicate online. This can cause further difficulties for AI sentiment trackers.
In Scotland, “greeting” can be used to mean “crying”. In the North of England, “dead” can be used to mean “very” (“I’m dead excited”), “made up” can be used to mean pleased or delighted (“I’m absolutely made up about my promotion”) and “scran” means “food”. In MLE (Multicultural London English/Urban English), “peng” means “attractive” or “looks good” and “gassed” means “excited”. Plus, social media users often type their posts using the phonetics of their accent, such as Liverpudlians using “yer” instead of “your” and Scots using “deed” instead of “dead”.
These are just a few examples of regional traits which can be hard for computer algorithms to decipher - whether they end up sorting comments into the wrong category or don’t manage to pick them up at all. There are so many more area-specific language characteristics that can only be understood by real people with genuine conversational experience. If your brand has multiple locations across the country, it’s highly likely that some of your incoming comments and messages will contain dialect variations, which can impact the results of your sentiment analysis efforts when using AI. Our Human Insight team has real-world experience and can interpret dialect and accent features much more effectively than bots, helping build a more precise picture of your brand perception.
Spelling & Grammatical Errors
In contrast to brands and businesses, who should always double-check their spelling and grammar before publishing a post, the majority of casual social media users see their favourite platforms as informal spaces where they can communicate in a relaxed manner. People feel less pressured to spell every word correctly and write in the same refined style they might have to use at work or school. This means that errors regularly appear and non-standard grammatical structures are frequently used.
Once again, this can prove difficult for AI-based sentiment monitors to comprehend. If a sentence hasn’t been constructed in the way a computer expects, it might struggle to work out what the person is trying to say, leaving it unable to detect the sentiment.
Industry-Specific Terms
Industry-specific language can also present challenges for bots. When it comes to hospitality brands in particular, we’ve seen culinary terms like “slow-cooked” or “slow-roasted” resulting in negative sentiment being wrongly identified. Without context, the word “slow” would usually be considered a bad thing in a restaurant setting - slow service, a slow booking system, or a slow response to a request. But when paired with “cooked” or “roasted”, we can see that the commenter is simply naming the type of dish they tried during their visit.
In fact, this was one of the first errors noticed by our Business Development Manager Lucy during her research into one of the market leaders in sentiment analysis. This prompted her to begin building Human Insight!
Emojis and Other Symbols
According to analysis of 7 billion tweets over 10 years, emoji use has never been higher: more than one in five tweets now includes an emoji, and 5 billion emojis are sent daily on Facebook Messenger. Symbols like these are widely used on social media to express emotions, moods and ideas, but they can be hard for computer algorithms to understand - especially when used for sarcastic purposes, as explained above. For example, the text portion of a comment may seem positive, but if the user has included a rolling eyes emoji (🙄) at the end, its meaning completely changes. Similarly, laughing emojis (😂 🤣) can of course imply happiness or joy, but are often used on social media to indicate irony or a mocking tone.
Real people with knowledge of current social media trends can use context to decode the sentiment behind these messages, whereas bots might make mistakes when labelling them, ultimately skewing the outcome of your analysis.
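For the curious, here is another deliberately simplified sketch (the word list and irony markers are invented for this example, not taken from any real system) showing how a scorer that ignores emoji can miss the cue that flips a comment’s meaning.

```python
# Toy illustration: a word-counting scorer that ignores emoji entirely.
# POSITIVE_WORDS and IRONY_MARKERS are invented for this example.
POSITIVE_WORDS = {"amazing", "love", "brilliant", "great"}
IRONY_MARKERS = {"🙄", "😂", "🤣"}  # often signal mockery rather than joy

def naive_score(comment: str) -> int:
    """Count positive words; emoji are never considered."""
    return sum(word.strip(".,!") in POSITIVE_WORDS for word in comment.lower().split())

comment = "Wow, amazing customer service, only took an hour to get a reply 🙄"

print(naive_score(comment))  # 1 -> the bot reads this as positive
# A human (or a context-aware reviewer) would spot the eye-roll cue:
print(any(marker in comment for marker in IRONY_MARKERS))  # True
```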
Summary
Overall, it’s clear that AI-based sentiment monitoring systems can’t achieve the same level of detail and accuracy as real people. Is your brand struggling to collect reliable data and find out how it’s really perceived online? Get in touch with 3sixfive today to learn how our Human Insight service can transform your sentiment tracking results and enable you to make truly effective business changes that elevate your customer experience to new heights.