English distilbert_base_uncased_distilled_squad_finetuned_srh_v1_pipeline: DistilBertForQuestionAnswering pipeline from allistair99

Description

Pretrained DistilBertForQuestionAnswering pipeline, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. distilbert_base_uncased_distilled_squad_finetuned_srh_v1_pipeline is an English model originally trained by allistair99.


How to use


from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("distilbert_base_uncased_distilled_squad_finetuned_srh_v1_pipeline", lang = "en")
annotations = pipeline.transform(df)


import com.johnsnowlabs.nlp.pretrained.PretrainedPipeline

val pipeline = new PretrainedPipeline("distilbert_base_uncased_distilled_squad_finetuned_srh_v1_pipeline", lang = "en")
val annotations = pipeline.transform(df)
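
As a fuller illustration, here is a minimal end-to-end sketch in Python. It assumes Spark NLP is installed and that the pipeline follows the standard Spark NLP question-answering convention of question and context input columns and an answer output column; those column names are assumptions, not confirmed by this page.

import sparknlp
from sparknlp.pretrained import PretrainedPipeline

# Start a Spark session with Spark NLP (assumes the library is installed)
spark = sparknlp.start()

# Column names follow the usual Spark NLP QA convention (assumed here)
df = spark.createDataFrame(
    [["What is my name?", "My name is Clara and I live in Berkeley."]]
).toDF("question", "context")

pipeline = PretrainedPipeline("distilbert_base_uncased_distilled_squad_finetuned_srh_v1_pipeline", lang = "en")
annotations = pipeline.transform(df)

# Inspect the predicted answer span (output column name "answer" is assumed)
annotations.select("answer.result").show(truncate = False)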

Model Information

Model Name: distilbert_base_uncased_distilled_squad_finetuned_srh_v1_pipeline
Type: pipeline
Compatibility: Spark NLP 5.5.0+
License: Open Source
Edition: Official
Language: en
Size: 247.2 MB

References

https://huggingface.co/allistair99/distilbert-base-uncased-distilled-squad-finetuned-SRH-v1

Included Models

  • MultiDocumentAssembler
  • DistilBertForQuestionAnswering
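
For reference, the two included stages correspond roughly to the manually assembled pipeline below. This is a sketch, not the official export: the underlying model name distilbert_base_uncased_distilled_squad_finetuned_srh_v1 is inferred from the pipeline name and is an assumption.

from pyspark.ml import Pipeline
from sparknlp.base import MultiDocumentAssembler
from sparknlp.annotator import DistilBertForQuestionAnswering

# Pair the question and context columns into annotation documents
document_assembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

# Model name inferred from the pipeline name; an assumption, not confirmed here
span_classifier = DistilBertForQuestionAnswering.pretrained(
    "distilbert_base_uncased_distilled_squad_finetuned_srh_v1", "en") \
    .setInputCols(["document_question", "document_context"]) \
    .setOutputCol("answer")

qa_pipeline = Pipeline(stages = [document_assembler, span_classifier])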