Description
Pretrained DistilBertForSequenceClassification, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. distilbert_base_uncased_finetuned_sanskrit_saskta_pipeline is an English model originally trained by datht.
How to use
from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_sanskrit_saskta_pipeline", lang="en")
annotations = pipeline.transform(df)
import com.johnsnowlabs.nlp.pretrained.PretrainedPipeline

val pipeline = new PretrainedPipeline("distilbert_base_uncased_finetuned_sanskrit_saskta_pipeline", lang = "en")
val annotations = pipeline.transform(df)
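The snippets above assume an active Spark session and a DataFrame df with a text column. As a minimal end-to-end sketch in Python (the sample sentence and the "class" output column name are assumptions, not taken from the packaged pipeline):

# Minimal sketch: start Spark NLP, build a one-row DataFrame, run the pipeline.
import sparknlp
from sparknlp.pretrained import PretrainedPipeline

spark = sparknlp.start()

# The input column name "text" matches the DocumentAssembler convention.
df = spark.createDataFrame([("I really enjoyed this movie.",)], ["text"])

pipeline = PretrainedPipeline("distilbert_base_uncased_finetuned_sanskrit_saskta_pipeline", lang="en")
annotations = pipeline.transform(df)

# Assuming the classifier stage writes its predictions to a "class" column.
annotations.select("text", "class.result").show(truncate=False)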
Model Information
| Model Name: | distilbert_base_uncased_finetuned_sanskrit_saskta_pipeline |
| Type: | pipeline |
| Compatibility: | Spark NLP 5.5.0+ |
| License: | Open Source |
| Edition: | Official |
| Language: | en |
| Size: | 249.5 MB |
References
https://huggingface.co/datht/distilbert-base-uncased-finetuned-SA
Included Models
- DocumentAssembler
- TokenizerModel
- DistilBertForSequenceClassification
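For reference, a rough sketch of wiring the same stages together by hand in Python (the pretrained model name, the column names, and the use of Tokenizer in place of the fitted TokenizerModel are assumptions):

# Hypothetical manual assembly of the stages listed above.
from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, DistilBertForSequenceClassification

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

# The exact pretrained model name below is an assumption.
sequence_classifier = DistilBertForSequenceClassification \
    .pretrained("distilbert_base_uncased_finetuned_sanskrit_saskta", "en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("class")

nlp_pipeline = Pipeline(stages=[document_assembler, tokenizer, sequence_classifier])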