English RobertaForQuestionAnswering Tiny Cased model (from deepset)

Description

Pretrained RobertaForQuestionAnswering model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. tinyroberta-6l-768d is an English model originally trained by deepset.


How to use

from sparknlp.base import MultiDocumentAssembler
from sparknlp.annotator import RoBertaForQuestionAnswering
from pyspark.ml import Pipeline

# Pair each question with its context and wrap both as document annotations
document_assembler = MultiDocumentAssembler() \
    .setInputCols(["question", "context"]) \
    .setOutputCols(["document_question", "document_context"])

# Load the pretrained QA model and point it at the document columns
question_answering = RoBertaForQuestionAnswering.pretrained("roberta_qa_tiny_6l_768d", "en") \
    .setInputCols(["document_question", "document_context"]) \
    .setOutputCol("answer") \
    .setCaseSensitive(True)

pipeline = Pipeline(stages=[document_assembler, question_answering])

data = spark.createDataFrame([["What's my name?", "My name is Clara and I live in Berkeley."]]).toDF("question", "context")

result = pipeline.fit(data).transform(data)
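
The extracted answer lives in the answer annotation column configured above; a minimal way to inspect it from the result DataFrame (column names follow the setOutputCol call and the standard Spark NLP annotation schema):

# Show the predicted answer span for each question/context pair
result.select("question", "context", "answer.result").show(truncate=False)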
import com.johnsnowlabs.nlp.MultiDocumentAssembler
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

// Pair each question with its context and wrap both as document annotations
val documentAssembler = new MultiDocumentAssembler()
     .setInputCols(Array("question", "context"))
     .setOutputCols(Array("document_question", "document_context"))

// Load the pretrained QA model and point it at the document columns
val questionAnswering = RoBertaForQuestionAnswering.pretrained("roberta_qa_tiny_6l_768d", "en")
     .setInputCols(Array("document_question", "document_context"))
     .setOutputCol("answer")
     .setCaseSensitive(true)

val pipeline = new Pipeline().setStages(Array(documentAssembler, questionAnswering))

val data = Seq(("What's my name?", "My name is Clara and I live in Berkeley.")).toDF("question", "context")

val result = pipeline.fit(data).transform(data)
import nlu
nlu.load("en.answer_question.roberta.tiny_6l_768d").predict("""What's my name?|||My name is Clara and I live in Berkeley.""")

Model Information

Model Name: roberta_qa_tiny_6l_768d
Compatibility: Spark NLP 4.2.4+
License: Open Source
Edition: Official
Input Labels: [document_question, document_context]
Output Labels: [answer]
Language: en
Size: 307.2 MB
Case sensitive: true
Max sentence length: 256
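
If your question plus context exceeds the 256-token limit above, the limit can be raised when loading the annotator. A minimal sketch using setMaxSentenceLength, the standard Spark NLP parameter for transformer annotators (the value 384 is only an illustration):

# Optional: allow longer question+context inputs (RoBERTa itself supports up to 512 tokens)
question_answering_long = RoBertaForQuestionAnswering.pretrained("roberta_qa_tiny_6l_768d", "en") \
    .setInputCols(["document_question", "document_context"]) \
    .setOutputCol("answer") \
    .setMaxSentenceLength(384)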

References

  • https://huggingface.co/deepset/tinyroberta-6l-768d
  • https://arxiv.org/pdf/1909.10351.pdf
  • https://github.com/deepset-ai/haystack
  • https://haystack.deepset.ai/guides/model-distillation
  • https://deepset.ai/german-bert
  • https://deepset.ai/germanquad
  • https://github.com/deepset-ai/FARM
  • https://twitter.com/deepset_ai
  • https://www.linkedin.com/company/deepset-ai/
  • https://haystack.deepset.ai/community/join
  • https://github.com/deepset-ai/haystack/discussions
  • https://deepset.ai
  • http://www.deepset.ai/jobs