
How good are humans at text analysis?


Accuracy is a measure of a technology's ability to interpret a text correctly. Some companies claim their technology is 99% accurate. Is that even possible? In today's video, we are going to debunk some myths about text analysis, and we are going to discover the best way to achieve a high level of accuracy.

Today's video is very different from the previous ones because, for the first time, I talk about data science. I think there is a lot to do in terms of education, and I hope you will find it interesting.

I have been leading a data science company for some years now, and I have put together a list of interesting questions that I am frequently asked. Today's question is: how good are humans at text analysis? Or, to put it differently: what counts as good accuracy in text analysis?

First things first: what is accuracy?

Accuracy, in our case, is a measure of a technology's ability to interpret a text correctly. Correctly means that the software can identify the topics mentioned by the author of the text and evaluate whether the sentiment towards each of them is positive or negative. To judge whether the analysis is correct, the scientifically sound approach is to have the same dataset manually analyzed by a group of humans. Keep in mind that the bigger the group, the more reliable the measurement.
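To make this concrete, here is a minimal sketch in Python of how accuracy could be computed against a human-annotated reference. Everything in it (the annotators' labels, the majority-vote reference, and the machine output) is invented for illustration; it is not data from any real evaluation.

```python
# Minimal sketch: measuring the accuracy of automated sentiment labels
# against a human-annotated reference. All labels below are invented.
from collections import Counter

# Three human annotators label the same five records; the reference label
# for each record is the majority vote (the bigger the group of annotators,
# the more reliable this reference becomes).
human_votes = [
    ["positive", "positive", "negative"],
    ["negative", "negative", "negative"],
    ["positive", "neutral", "positive"],
    ["negative", "negative", "positive"],
    ["positive", "positive", "positive"],
]
reference = [Counter(votes).most_common(1)[0][0] for votes in human_votes]

# Hypothetical output of a text-analysis tool for the same five records.
machine = ["positive", "negative", "positive", "positive", "positive"]

correct = sum(1 for ref, pred in zip(reference, machine) if ref == pred)
print(f"Accuracy: {correct / len(reference):.0%}")  # 4 out of 5 -> 80%
```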

Let's get back to us. If you search the web for text analysis or sentiment analysis software, you will probably find very catchy advertisements in which companies claim 99% accuracy or something close to it. Let me tell you one thing: you will never get anywhere near 99% accuracy, regardless of the software you use.

Why do I say that you won’t get this accuracy?

Number one: technology is not good enough yet

Number two: humans disagree with each other too often

Let's start with the first reason, which is less intriguing from a psychological perspective but straightforward to explain. Today's analysis technologies are good, but not good enough to interpret free text as well as you might expect. Most analysis technologies are based on statistical algorithms, like the ones that drive a driverless car. These algorithms work well when they can be trained on very large datasets; I am talking about millions of records. A driverless car collects billions of data points every minute it drives around, so it is a perfect match for this type of technology. For text analysis, on the other hand, it is unlikely that you will ever collect enough texts about a given topic to train such a statistical algorithm. So, surprisingly, it is easier to drive a driverless car than to analyze a text. Can you believe it?

On the other side, we have rule-based algorithms. These use predetermined interpretative rules written by humans. What does that mean? Basically, an analyst teaches the technology how to interpret a sentence, then another one, and so on. In this case, the technology doesn't learn from large datasets but from individual, precise commands given by a human. As you can imagine, this technique is very time-consuming, but as of today it is the most accurate one. Am I saying 100% accuracy? No, but 90% is feasible, and that is already a very good result.
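To give you an idea of what such a hand-written rule can look like, here is a toy sketch. The keyword lists and the classify function are my own illustration of the general approach, not the actual rules used by any particular product.

```python
# Toy rule-based sentiment classifier: the "rules" are keyword lists and a
# simple negation check written by hand, not learned from data.
# The word lists below are illustrative only.
POSITIVE_WORDS = {"great", "love", "excellent", "easy"}
NEGATIVE_WORDS = {"bad", "hate", "broken", "difficult"}
NEGATIONS = {"not", "never", "no"}

def classify(sentence: str) -> str:
    tokens = sentence.lower().replace(".", "").split()
    score = 0
    for i, token in enumerate(tokens):
        negated = i > 0 and tokens[i - 1] in NEGATIONS
        if token in POSITIVE_WORDS:
            score += -1 if negated else 1
        elif token in NEGATIVE_WORDS:
            score += 1 if negated else -1
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(classify("The battery life is great"))  # positive
print(classify("The setup was not easy"))     # negative
```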

Let’s now move to the more philosophical reason why you won’t get 99% accuracy: humans disagree with each other too often

I recently read about an experiment conducted at the University of Pittsburgh. It's called Recognizing Contextual Polarity in Phrase-Level Sentiment Analysis, and I really recommend reading it; it's available online for free. What did the researchers do? They wanted to understand how good humans are at interpreting texts, so that we can compare that to the accuracy of our technologies.

In short, they had a group of humans analyze the same records, and they found that the humans agreed with each other 82% of the time. They literally say that “Human analysts tend to agree 82% of the time, which means that they will always find documents on which they disagree with the machine on”. Let me explain this in simple words: if two people read the same text and the text contains one hundred different topics, these two people will agree on only 82 of them and disagree on the remaining 18.
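If you want to see what that number means in code, here is a tiny sketch that computes the raw pairwise agreement between two analysts. The labels are invented, and real studies usually also report chance-corrected measures such as Cohen's kappa, but the plain percentage is enough to get the idea.

```python
# Illustrative only: pairwise agreement between two human analysts who
# labelled the same ten records (labels invented for this sketch).
analyst_a = ["positive", "negative", "neutral", "positive", "negative",
             "positive", "neutral", "negative", "positive", "positive"]
analyst_b = ["positive", "negative", "positive", "positive", "negative",
             "negative", "neutral", "negative", "positive", "positive"]

agreed = sum(1 for a, b in zip(analyst_a, analyst_b) if a == b)
print(f"Agreement: {agreed / len(analyst_a):.0%}")  # 8 out of 10 -> 80%
```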

What does this mean from a statistical perspective? It means that moving from 80 or 90 percent accuracy to 100% would be almost impossible even with the best algorithms, and even if one day we reached that impressive level of accuracy, roughly 18% of our colleagues would still think we are wrong.

Very long story short: machines, with the help of humans, can reach accuracy levels above 90%. Humans alone hardly go above 80%. What's the lesson learned? We are worse at this than we think.

Read more about NLP, the technology that replaces humans in text analysis, here.
