English nerd_nerd_random3_seed2_twitter_roberta_base_2021_124m RoBertaForSequenceClassification from tweettemposhift

Description

Pretrained RoBertaForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. nerd_nerd_random3_seed2_twitter_roberta_base_2021_124m is an English model originally trained by tweettemposhift.

How to use

from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, RoBertaForSequenceClassification
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol('text') \
    .setOutputCol('document')

tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('token')

sequenceClassifier = RoBertaForSequenceClassification.pretrained("nerd_nerd_random3_seed2_twitter_roberta_base_2021_124m", "en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("class")

pipeline = Pipeline().setStages([documentAssembler, tokenizer, sequenceClassifier])
data = spark.createDataFrame([["I love spark-nlp"]]).toDF("text")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)


import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier = RoBertaForSequenceClassification.pretrained("nerd_nerd_random3_seed2_twitter_roberta_base_2021_124m", "en")
    .setInputCols(Array("document", "token"))
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))
val data = Seq("I love spark-nlp").toDS.toDF("text")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)
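
After transform, the predicted label for each row is stored in the class output column as a Spark NLP annotation. A minimal sketch of how to inspect the predictions (shown here in Python; "class.result" is the standard Spark NLP field holding the annotation's result string):

# Show the input text next to the predicted label;
# "class.result" extracts the label strings from the annotation structs.
pipelineDF.select("text", "class.result").show(truncate=False)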

Model Information

Model Name: nerd_nerd_random3_seed2_twitter_roberta_base_2021_124m
Compatibility: Spark NLP 5.5.0+
License: Open Source
Edition: Official
Input Labels: [document, token]
Output Labels: [class]
Language: en
Size: 468.3 MB

References

https://huggingface.co/tweettemposhift/nerd-nerd_random3_seed2-twitter-roberta-base-2021-124m