Description
This is a multilingual XLM-RoBERTa-base model trained on ~198M tweets and fine-tuned for sentiment analysis. The sentiment fine-tuning was done on 8 languages (Ar, En, Fr, De, Hi, It, Sp, Pt), but the model can be used for more languages (see the paper for details).
Paper: XLM-T: A Multilingual Language Model Toolkit for Twitter
Git Repo: XLM-T official repository
This model has been integrated into the TweetNLP library.
HF Model: https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment
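For reference, the underlying checkpoint can also be loaded directly from the Hugging Face Hub with the transformers library (a minimal sketch, assuming transformers and sentencepiece are installed; this is separate from the Spark NLP usage shown under "How to use"):

from transformers import pipeline

# Load the original cardiffnlp checkpoint referenced above
sentiment = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-xlm-roberta-base-sentiment"
)

# Returns a label and confidence score, e.g. [{'label': 'positive', 'score': ...}]
print(sentiment("T'estimo! ❤️"))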
Predicted Entities
class
How to use
import sparknlp
from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, XlmRoBertaForSequenceClassification

# Start a Spark NLP session; this provides the `spark` SparkSession used below
spark = sparknlp.start()
document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")
sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained("sentiment_twitter_xlm_roBerta_pdc", "en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("class")
pipeline = Pipeline(stages=[
    document_assembler,
    tokenizer,
    sequenceClassifier
])
# A few simple examples in Korean, Catalan, English, and Filipino
example = spark.createDataFrame(
    [["사랑해!"], ["T'estimo! ❤️"], ["I love you!"], ["Mahal kita!"]]
).toDF("text")

result = pipeline.fit(example).transform(example)

# result is a DataFrame; show the predicted sentiment class for each text
result.select("text", "class.result").show(truncate=False)
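For quick, single-sentence inference without constructing a Spark DataFrame, the fitted pipeline can also be wrapped in a LightPipeline (a minimal sketch; the empty DataFrame below is used only to fit the pipeline, since none of its stages are trainable):

from sparknlp.base import LightPipeline

# Fit on an empty DataFrame and wrap the resulting model for lightweight in-memory inference
empty_df = spark.createDataFrame([[""]]).toDF("text")
light_pipeline = LightPipeline(pipeline.fit(empty_df))

# annotate() returns a dict keyed by output column name, e.g. the predicted label under "class"
print(light_pipeline.annotate("I love you!")["class"])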
Model Information
Model Name: sentiment_twitter_xlm_roBerta_pdc
Compatibility: Spark NLP 3.3.2+
License: Open Source
Edition: Community
Input Labels: [document, token]
Output Labels: [class]
Language: en
Size: 1.0 GB
Case sensitive: true
Max sentence length: 512
Dependencies: xlm_roBerta