Description
Pretrained T5ForConditionalGeneration model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. t5-efficient-small-nl24 is an English model originally trained by Google.
How to use
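Spark NLP and PySpark must be available in the environment before running the snippets below; a minimal setup sketch (package names are the real PyPI distributions, the pinned versions are illustrative assumptions matching the compatibility row below):

```shell
# Install Spark NLP and a compatible PySpark (versions shown are assumptions)
pip install spark-nlp==4.3.0 pyspark==3.3.1
```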
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import T5Transformer
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

t5 = T5Transformer.pretrained("t5_efficient_small_nl24", "en") \
    .setInputCols(["document"]) \
    .setOutputCol("answers")

pipeline = Pipeline(stages=[documentAssembler, t5])
data = spark.createDataFrame([["PUT YOUR STRING HERE"]]).toDF("text")
result = pipeline.fit(data).transform(data)
import com.johnsnowlabs.nlp.base.DocumentAssembler
import com.johnsnowlabs.nlp.annotators.seq2seq.T5Transformer
import org.apache.spark.ml.Pipeline
import spark.implicits._

val documentAssembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val t5 = T5Transformer.pretrained("t5_efficient_small_nl24", "en")
    .setInputCols("document")
    .setOutputCol("answers")

val pipeline = new Pipeline().setStages(Array(documentAssembler, t5))
val data = Seq("PUT YOUR STRING HERE").toDS.toDF("text")
val result = pipeline.fit(data).transform(data)
Model Information
| Model Name: | t5_efficient_small_nl24 |
|---|---|
| Compatibility: | Spark NLP 4.3.0+ |
| License: | Open Source |
| Edition: | Official |
| Input Labels: | [documents] |
| Output Labels: | [t5] |
| Language: | en |
| Size: | 402.2 MB |
References
- https://huggingface.co/google/t5-efficient-small-nl24
- https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html
- https://arxiv.org/abs/2109.10686
- https://github.com/google-research/google-research/issues/986#issuecomment-1035051145