Description
Pretrained BertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. `bert_large_cased_whole_word_masking` is an English model originally trained by Hugging Face.
Predicted Entities
How to use
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, BertEmbeddings
from pyspark.ml import Pipeline

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("documents")

# A Tokenizer stage is required to produce the "token" column consumed below.
tokenizer = Tokenizer() \
    .setInputCols(["documents"]) \
    .setOutputCol("token")

embeddings = BertEmbeddings.pretrained("bert_large_cased_whole_word_masking", "en") \
    .setInputCols(["documents", "token"]) \
    .setOutputCol("embeddings")

pipeline = Pipeline().setStages([document_assembler, tokenizer, embeddings])
# `data` is a DataFrame with a "text" column.
pipelineModel = pipeline.fit(data)
pipelineDF = pipelineModel.transform(data)
import com.johnsnowlabs.nlp.base.DocumentAssembler
import com.johnsnowlabs.nlp.annotator.{Tokenizer, BertEmbeddings}
import org.apache.spark.ml.Pipeline

val document_assembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("documents")

// A Tokenizer stage is required to produce the "token" column consumed below.
val tokenizer = new Tokenizer()
  .setInputCols("documents")
  .setOutputCol("token")

val embeddings = BertEmbeddings
  .pretrained("bert_large_cased_whole_word_masking", "en")
  .setInputCols(Array("documents", "token"))
  .setOutputCol("embeddings")

val pipeline = new Pipeline().setStages(Array(document_assembler, tokenizer, embeddings))
// `data` is a DataFrame with a "text" column.
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)
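After the transform, each token in `pipelineDF` carries an embedding vector (1024 dimensions for a BERT-large model). As an illustrative sketch of how such vectors are typically compared once extracted from the DataFrame, here is cosine similarity in plain NumPy; the toy vectors below are stand-ins, not actual model output:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity: dot product scaled by the product of the vector norms.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional stand-ins for the model's 1024-dimensional token vectors.
v1 = [0.2, -0.1, 0.4, 0.3]
v2 = [0.1, -0.2, 0.5, 0.2]
print(cosine_similarity(v1, v2))
```

Values close to 1.0 indicate tokens used in similar contexts, which is the usual way these embeddings feed downstream similarity or classification tasks.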
Model Information
| | |
|---|---|
| Model Name: | bert_large_cased_whole_word_masking |
| Compatibility: | Spark NLP 5.5.0+ |
| License: | Open Source |
| Edition: | Official |
| Input Labels: | [document, token] |
| Output Labels: | [bert] |
| Language: | en |
| Size: | 1.2 GB |
References
https://huggingface.co/bert-large-cased-whole-word-masking