Description
Pretrained RoBertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. babyberta_wikipedia1_2_5_with_masking_run2_finetuned_qasrl is an English model originally trained by lielbin.
How to use
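The Python example below assumes an active Spark session with Spark NLP available. A minimal sketch for starting one (assuming the spark-nlp PyPI package is installed) is:

import sparknlp

# Start a Spark session with the Spark NLP library loaded
spark = sparknlp.start()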
             
from sparknlp.base import MultiDocumentAssembler
from sparknlp.annotator import RoBertaForQuestionAnswering
from pyspark.ml import Pipeline

# Assemble the raw question and context columns into annotated document columns
documentAssembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

# Load the pretrained question-answering model
spanClassifier = RoBertaForQuestionAnswering.pretrained("babyberta_wikipedia1_2_5_with_masking_run2_finetuned_qasrl", "en") \
    .setInputCols(["document_question", "document_context"]) \
    .setOutputCol("answer")

pipeline = Pipeline().setStages([documentAssembler, spanClassifier])

data = spark.createDataFrame([["What framework do I use?", "I use spark-nlp."]]).toDF("question", "context")
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)
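The predicted answer spans land in the answer annotation column of the transformed DataFrame; one way to inspect them is to select the result field, for example:

# Show the extracted answer text for each question/context pair
pipelineDF.select("answer.result").show(truncate=False)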
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

// Assemble the raw question and context columns into annotated document columns
val documentAssembler = new MultiDocumentAssembler()
    .setInputCols(Array("question", "context"))
    .setOutputCols(Array("document_question", "document_context"))

// Load the pretrained question-answering model
val spanClassifier = RoBertaForQuestionAnswering.pretrained("babyberta_wikipedia1_2_5_with_masking_run2_finetuned_qasrl", "en")
    .setInputCols(Array("document_question", "document_context"))
    .setOutputCol("answer")

val pipeline = new Pipeline().setStages(Array(documentAssembler, spanClassifier))

val data = Seq(("What framework do I use?", "I use spark-nlp.")).toDF("question", "context")
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)
Model Information
| Model Name: | babyberta_wikipedia1_2_5_with_masking_run2_finetuned_qasrl | 
| Compatibility: | Spark NLP 5.5.0+ | 
| License: | Open Source | 
| Edition: | Official | 
| Input Labels: | [document_question, document_context] | 
| Output Labels: | [answer] | 
| Language: | en | 
| Size: | 32.0 MB | 
References
https://huggingface.co/lielbin/BabyBERTa-wikipedia1_2.5-with-Masking_run2-finetuned-QASRL