Description
Pretrained Question Answering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. Part_2_mBERT_Model_E1 is an English model originally trained by horsbug98.
How to use
from sparknlp.base import MultiDocumentAssembler
from sparknlp.annotator import BertForQuestionAnswering
from pyspark.ml import Pipeline

# Assemble the raw question and context columns into annotation documents
documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

# Load the pretrained extractive QA model
spanClassifier = BertForQuestionAnswering.pretrained("bert_qa_part_2_mbert_model_e1", "en") \
    .setInputCols(["document_question", "document_context"]) \
    .setOutputCol("answer") \
    .setCaseSensitive(True)

pipeline = Pipeline(stages=[documentAssembler, spanClassifier])

data = spark.createDataFrame([["What is my name?", "My name is Clara and I live in Berkeley."]]).toDF("question", "context")

result = pipeline.fit(data).transform(data)
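The predicted span can then be read from the `result` field of the `answer` annotation column; a minimal sketch, assuming the Python pipeline above has just been run:

# Show the extracted answer span for each question/context pair
result.select("answer.result").show(truncate=False)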
import spark.implicits._
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline

val documentAssembler = new MultiDocumentAssembler()
  .setInputCols(Array("question", "context"))
  .setOutputCols(Array("document_question", "document_context"))

val spanClassifier = BertForQuestionAnswering.pretrained("bert_qa_part_2_mbert_model_e1", "en")
  .setInputCols(Array("document_question", "document_context"))
  .setOutputCol("answer")
  .setCaseSensitive(true)

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))

val data = Seq(("What is my name?", "My name is Clara and I live in Berkeley.")).toDF("question", "context")

val result = pipeline.fit(data).transform(data)
import nlu
nlu.load("en.answer_question.bert.tydiqa.").predict("""What is my name?|||My name is Clara and I live in Berkeley.""")
Model Information
| Model Name: | bert_qa_part_2_mbert_model_e1 | 
| Compatibility: | Spark NLP 4.0.0+ | 
| License: | Open Source | 
| Edition: | Official | 
| Input Labels: | [document_question, document_context] | 
| Output Labels: | [answer] | 
| Language: | en | 
| Size: | 665.7 MB | 
| Case sensitive: | true | 
| Max sentence length: | 512 | 
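The case sensitivity and maximum sentence length listed above map to annotator parameters that can be adjusted when the model is loaded. A minimal sketch, assuming the same Spark NLP Python API as in the example above and using the table's 512-token limit:

# Explicitly set the parameters reported in the table (values here are illustrative)
spanClassifier = BertForQuestionAnswering.pretrained("bert_qa_part_2_mbert_model_e1", "en") \
    .setInputCols(["document_question", "document_context"]) \
    .setOutputCol("answer") \
    .setCaseSensitive(True) \
    .setMaxSentenceLength(512)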
References
- https://huggingface.co/horsbug98/Part_2_mBERT_Model_E1