English DistilBertForSequenceClassification Cased model (from MoritzLaurer)

Description

Pretrained DistilBertForSequenceClassification model, adapted from Hugging Face and curated to provide scalability and production readiness with Spark NLP. policy-distilbert-7d is an English model originally trained by MoritzLaurer.

Predicted Entities

economy, political system, welfare and quality of life, fabric of society, external relations, freedom and democracy, social groups


How to use

import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, DistilBertForSequenceClassification
from pyspark.ml import Pipeline

spark = sparknlp.start()

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols("document") \
    .setOutputCol("token")

sequenceClassifier_loaded = DistilBertForSequenceClassification.pretrained("distilbert_sequence_classifier_policy_distilbert_7d","en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("class")

pipeline = Pipeline(stages=[documentAssembler, tokenizer, sequenceClassifier_loaded])

data = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text")

result = pipeline.fit(data).transform(data)
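The transformed DataFrame keeps the input text plus the annotation column named "class" set above. As a minimal sketch, the predicted policy category can be inspected like this:

# Show each input text next to the label produced by the classifier stage.
# "class.result" extracts the label strings from the annotation structs.
result.select("text", "class.result").show(truncate=False)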

The same pipeline in Scala:

import spark.implicits._
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer() 
    .setInputCols(Array("document"))
    .setOutputCol("token")

val sequenceClassifier_loaded = DistilBertForSequenceClassification.pretrained("distilbert_sequence_classifier_policy_distilbert_7d","en") 
    .setInputCols(Array("document", "token")) 
    .setOutputCol("class")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, sequenceClassifier_loaded))

val data = Seq("PUT YOUR STRING HERE").toDF("text")

val result = pipeline.fit(data).transform(data)

The model is also available through the NLU one-liner:

import nlu
nlu.load("en.classify.distil_bert.by_moritzlaurer").predict("""PUT YOUR STRING HERE""")
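As a rough sketch (exact column names may differ across NLU versions), predict also accepts a list of texts and returns a pandas DataFrame with one row per input and its predicted policy category:

import nlu

pipe = nlu.load("en.classify.distil_bert.by_moritzlaurer")
preds = pipe.predict([
    "The government should expand unemployment benefits.",
    "We must invest more in national defense and our alliances."
])
print(preds)  # pandas DataFrame with the input sentences and predicted labels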

Model Information

Model Name: distilbert_sequence_classifier_policy_distilbert_7d
Compatibility: Spark NLP 4.1.0+
License: Open Source
Edition: Official
Input Labels: [document, token]
Output Labels: [class]
Language: en
Size: 249.8 MB
Case sensitive: true
Max sentence length: 128
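
The seven categories listed under Predicted Entities can also be read back from the loaded model itself; a minimal sketch, assuming an active Spark NLP session started via sparknlp.start():

from sparknlp.annotator import DistilBertForSequenceClassification

# Load the pretrained classifier and print the labels it was trained on.
model = DistilBertForSequenceClassification.pretrained(
    "distilbert_sequence_classifier_policy_distilbert_7d", "en")
print(model.getClasses())  # e.g. economy, political system, welfare and quality of life, ...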

References

  • https://huggingface.co/MoritzLaurer/policy-distilbert-7d
  • https://manifesto-project.wzb.eu/down/data/2020b/codebooks/codebook_MPDataset_MPDS2020b.pdf
  • https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_recall_fscore_support.html
  • https://manifesto-project.wzb.eu/information/documents/information
  • https://manifesto-project.wzb.eu/datasets
  • https://scikit-learn.org/stable/modules/generated/sklearn.metrics.balanced_accuracy_score.html