English distilbert_base_uncased_distilled_clinc_feng_2052_pipeline pipeline DistilBertForSequenceClassification from feng-2052

Description

Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. distilbert_base_uncased_distilled_clinc_feng_2052_pipeline is an English model originally trained by feng-2052.


How to use


from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("distilbert_base_uncased_distilled_clinc_feng_2052_pipeline", lang = "en")
annotations = pipeline.transform(df)


import com.johnsnowlabs.nlp.pretrained.PretrainedPipeline

val pipeline = new PretrainedPipeline("distilbert_base_uncased_distilled_clinc_feng_2052_pipeline", lang = "en")
val annotations = pipeline.transform(df)
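
The snippets above assume an existing DataFrame df. A minimal end-to-end Python sketch is shown below; it assumes a Spark session started via sparknlp.start() and an input DataFrame with a text column (the usual Spark NLP default), and the "class" output column name is an assumption to verify against the actual pipeline output.

import sparknlp
from sparknlp.pretrained import PretrainedPipeline

# Start a Spark session with Spark NLP on the classpath
spark = sparknlp.start()

# Example input: a DataFrame with a "text" column (assumed default input column)
df = spark.createDataFrame([("transfer 100 dollars to my savings account",)], ["text"])

# Load the pretrained pipeline and run it over the DataFrame
pipeline = PretrainedPipeline("distilbert_base_uncased_distilled_clinc_feng_2052_pipeline", lang="en")
annotations = pipeline.transform(df)

# Inspect the predicted labels; check annotations.columns if the output column differs
annotations.select("class.result").show(truncate=False)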

Model Information

Model Name: distilbert_base_uncased_distilled_clinc_feng_2052_pipeline
Type: pipeline
Compatibility: Spark NLP 5.5.1+
License: Open Source
Edition: Official
Language: en
Size: 249.9 MB

References

https://huggingface.co/feng-2052/distilbert-base-uncased-distilled-clinc

Included Models

  • DocumentAssembler
  • TokenizerModel
  • DistilBertForSequenceClassification
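
For reference, the sketch below shows how the same three stages could be assembled by hand in Python. The pretrained model name and the column wiring are illustrative assumptions based on this pipeline's name, not details taken from this card.

from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, DistilBertForSequenceClassification

# Turns raw text into document annotations
document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

# Splits each document into tokens
tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

# Sequence classifier; the pretrained name below is an assumption derived from the pipeline name
classifier = DistilBertForSequenceClassification.pretrained(
    "distilbert_base_uncased_distilled_clinc_feng_2052", "en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("class")

pipeline = Pipeline(stages=[document_assembler, tokenizer, classifier])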