Russian BertForSequenceClassification Base Cased model (from cointegrated)

Description

Pretrained BertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. rubert-base-cased-nli-threeway is a Russian natural language inference (NLI) model, originally trained by cointegrated, that labels the relationship between a pair of texts as entailment, contradiction, or neutral.

Predicted Entities

neutral, contradiction, entailment


How to use
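
Both snippets below assume a running Spark session with Spark NLP on the classpath. A minimal sketch for starting one from Python, assuming the spark-nlp and pyspark packages are installed:

import sparknlp

# Start (or reuse) a SparkSession with the Spark NLP dependency loaded
spark = sparknlp.start()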

from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, BertForSequenceClassification
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols("document") \
    .setOutputCol("token")

sequenceClassifier = BertForSequenceClassification.pretrained("bert_sequence_classifier_ru_base_cased_nli_threeway","ru") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("class")

pipeline = Pipeline(stages=[documentAssembler, tokenizer, sequenceClassifier])

data = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text")

result = pipeline.fit(data).transform(data)

import spark.implicits._
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val sequenceClassifier = BertForSequenceClassification.pretrained("bert_sequence_classifier_ru_base_cased_nli_threeway","ru")
    .setInputCols(Array("document", "token"))
    .setOutputCol("ner")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier))

val data = Seq("PUT YOUR STRING HERE").toDS.toDF("text")

val result = pipeline.fit(data).transform(data)
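
In both languages the predictions land in the class column as Spark NLP annotations; one way to read the label out of them, as a minimal Python sketch over the result DataFrame built above:

# The "class" annotations expose the predicted label in their result field
result.select("text", "class.result").show(truncate=False)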

Model Information

Model Name: bert_sequence_classifier_ru_base_cased_nli_threeway
Compatibility: Spark NLP 4.3.1+
License: Open Source
Edition: Official
Input Labels: [document, token]
Output Labels: [class]
Language: ru
Size: 667.1 MB
Case sensitive: true
Max sentence length: 128
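
Inputs longer than the 128-token maximum sentence length are truncated by the annotator. For quick single-string inference without building a DataFrame, the fitted pipeline can also be wrapped in Spark NLP's LightPipeline; a minimal Python sketch, reusing the pipeline and data defined above:

from sparknlp.base import LightPipeline

# Wrap the fitted PipelineModel for fast in-memory annotation
light_model = LightPipeline(pipeline.fit(data))

# annotate() returns a dict keyed by output column; the "class" entry holds the predicted label
annotations = light_model.annotate("PUT YOUR STRING HERE")
print(annotations["class"])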

References

  • https://huggingface.co/cointegrated/rubert-base-cased-nli-threeway
  • https://github.com/felipessalvatore/NLI_datasets
  • https://github.com/sheng-z/JOCI
  • https://cims.nyu.edu/~sbowman/multinli/
  • https://aclanthology.org/I17-1011/
  • http://www.lrec-conf.org/proceedings/lrec2014/pdf/363_Paper.pdf
  • https://nlp.stanford.edu/projects/snli/
  • https://github.com/facebookresearch/anli
  • https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md
  • https://github.com/facebookresearch/Imppres
  • https://cs.brown.edu/people/epavlick/papers/ans.pdf
  • https://people.ict.usc.edu/~gordon/copa.html
  • https://aclanthology.org/I17-1100
  • https://allenai.org/data/scitail
  • https://github.com/verypluming/HELP
  • https://github.com/atticusg/MoNLI
  • https://russiansuperglue.com/ru/tasks/task_info/TERRa