Description
Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. distilbert_emotion_bcokdilli is an English model originally trained by bcokdilli.
How to use
     
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, DistilBertForSequenceClassification
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_emotion_bcokdilli", "en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])

data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)
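The predicted emotion label is written to the class column of the transformed DataFrame. A minimal way to inspect it, assuming the pipeline above has already been run:

# "result" holds the predicted label string for each input row
pipelineDF.select("text", "class.result").show(truncate=False)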
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = DistilBertForSequenceClassification.pretrained("distilbert_emotion_bcokdilli", "en")
    .setInputCols(Array("document", "token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))

val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)
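The same inspection in Scala, assuming the pipeline above has been run:

// "result" holds the predicted label string for each input row
pipelineDF.select("text", "class.result").show(false)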
Model Information
| Model Name: | distilbert_emotion_bcokdilli | 
| Compatibility: | Spark NLP 5.5.0+ | 
| License: | Open Source | 
| Edition: | Official | 
| Input Labels: | [document, token] | 
| Output Labels: | [class] | 
| Language: | en | 
| Size: | 249.5 MB | 
References
https://huggingface.co/bcokdilli/distilbert-emotion