English DistilBertForQuestionAnswering Base Uncased model (from vitusya)

Description

Pretrained DistilBertForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. distilbert-base-uncased-finetuned-squad is an English model originally trained by vitusya.

How to use

# Python
import sparknlp
from sparknlp.base import MultiDocumentAssembler
from sparknlp.annotator import DistilBertForQuestionAnswering
from pyspark.ml import Pipeline

# Start a Spark session with Spark NLP if one is not already running
spark = sparknlp.start()

Document_Assembler = MultiDocumentAssembler()\
     .setInputCols(["question", "context"])\
     .setOutputCols(["document_question", "document_context"])

Question_Answering = DistilBertForQuestionAnswering.pretrained("distilbert_qa_vitusya_base_uncased_finetuned_squad","en")\
     .setInputCols(["document_question", "document_context"])\
     .setOutputCol("answer")\
     .setCaseSensitive(False)
    
pipeline = Pipeline(stages=[Document_Assembler, Question_Answering])

data = spark.createDataFrame([["What's my name?","My name is Clara and I live in Berkeley."]]).toDF("question", "context")

result = pipeline.fit(data).transform(data)
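
The prediction is written to the answer column as Spark NLP annotations. A minimal sketch of inspecting the extracted answer text from the result DataFrame above, assuming the pipeline has just been run:

# Show the extracted answer spans from the "answer" annotation column
result.select("answer.result").show(truncate=False)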

// Scala
import com.johnsnowlabs.nlp.MultiDocumentAssembler
import com.johnsnowlabs.nlp.annotators.classifier.dl.DistilBertForQuestionAnswering
import org.apache.spark.ml.Pipeline
import spark.implicits._

val Document_Assembler = new MultiDocumentAssembler()
     .setInputCols(Array("question", "context"))
     .setOutputCols(Array("document_question", "document_context"))

val Question_Answering = DistilBertForQuestionAnswering.pretrained("distilbert_qa_vitusya_base_uncased_finetuned_squad","en")
     .setInputCols(Array("document_question", "document_context"))
     .setOutputCol("answer")
     .setCaseSensitive(false)
    
val pipeline = new Pipeline().setStages(Array(Document_Assembler, Question_Answering))

val data = Seq(("What's my name?", "My name is Clara and I live in Berkeley.")).toDF("question", "context")

val result = pipeline.fit(data).transform(data)

# NLU
import nlu
nlu.load("en.answer_question.squad.distil_bert.base_uncased.by_vitusya").predict("""What's my name?|||My name is Clara and I live in Berkeley.""")

Model Information

Model Name: distilbert_qa_vitusya_base_uncased_finetuned_squad
Compatibility: Spark NLP 4.3.0+
License: Open Source
Edition: Official
Input Labels: [document_question, document_context]
Output Labels: [answer]
Language: en
Size: 247.6 MB
Case sensitive: false
Max sentence length: 512
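
The input and output labels above match the columns wired in the pipeline from the How to use section. For quick single-example inference without building a DataFrame, the fitted pipeline can also be wrapped in a LightPipeline; this is a minimal sketch assuming the standard Spark NLP LightPipeline API and the Python pipeline defined earlier:

from sparknlp.base import LightPipeline

# Wrap the fitted PipelineModel for in-memory annotation
light = LightPipeline(pipeline.fit(data))

# Pass the question and the context; the "answer" key holds the predicted span
annotations = light.fullAnnotate("What's my name?", "My name is Clara and I live in Berkeley.")
print(annotations[0]["answer"][0].result)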

References

  • https://huggingface.co/vitusya/distilbert-base-uncased-finetuned-squad