BERT Classifier

Multi-class emotion detection in text is a conventionally difficult task because of the subjectivity, subtlety, and stylistic variation of written language. The project evaluated several conventional and deep learning techniques, including Logistic Regression, LSTM, BiLSTM, and BERT, found the BERT model to perform best, and developed an application that classifies text from an external dataset into the seven basic emotion classes.
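As a rough illustration of the approach (not the project's exact code), here is a minimal sketch of a BERT-based multi-class classifier in Keras/TensorFlow. The TensorFlow Hub handles, dropout rate, and learning rate are illustrative assumptions; only the seven-class softmax head reflects the task described above.

```python
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401  (registers ops required by the BERT preprocessor)

# Assumed TF Hub preprocessor/encoder pair; any compatible BERT variant would work.
PREPROCESSOR_URL = "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3"
ENCODER_URL = "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4"

NUM_CLASSES = 7  # seven basic emotion classes


def build_bert_emotion_classifier() -> tf.keras.Model:
    # Raw text goes in; the Hub preprocessor handles tokenization and padding.
    text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name="text")
    preprocessed = hub.KerasLayer(PREPROCESSOR_URL, name="preprocessing")(text_input)

    # Fine-tune the encoder end to end (trainable=True).
    encoder_outputs = hub.KerasLayer(ENCODER_URL, trainable=True, name="bert_encoder")(preprocessed)
    pooled = encoder_outputs["pooled_output"]  # [CLS]-based sentence representation

    x = tf.keras.layers.Dropout(0.1)(pooled)
    logits = tf.keras.layers.Dense(NUM_CLASSES, name="emotion_logits")(x)

    model = tf.keras.Model(text_input, logits)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )
    return model
```

A model built this way can be trained directly on string inputs and integer emotion labels (e.g., `model.fit(texts, labels, epochs=3)`), and the same pattern runs on a Colab TPU with a `tf.distribute` strategy.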

The results from the BERT-based classifier are as follows:

[Figure: evaluation results of the BERT-based classifier]


Teammates: Maobin Guo, Pranav Manjunath, Xinyi Pan

Mentor: Dr. John Haws

Technologies

Google Colab TPU, BERT, Python, Keras, TensorFlow, scikit-learn
