Description
ALBERT is “A Lite” version of BERT, a popular self-supervised language representation learning algorithm. ALBERT uses two parameter-reduction techniques, factorized embedding parameterization and cross-layer parameter sharing, which lower memory consumption, make large-scale configurations feasible, and avoid the model degradation that comes with naively growing model size. The details are described in the paper “ALBERT: A Lite BERT for Self-supervised Learning of Language Representations.”
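To make the parameter reduction concrete, factorized embedding parameterization replaces BERT's single V×H embedding matrix with two smaller matrices of sizes V×E and E×H. The back-of-the-envelope sketch below uses the standard ALBERT-large figures from the paper (vocabulary 30,000, embedding size 128, hidden size 1,024), assumed here purely for illustration:

```python
# Back-of-the-envelope: factorized embedding parameterization.
# V, E, H are the standard ALBERT-large values, assumed for illustration.
V, E, H = 30_000, 128, 1_024   # vocab size, embedding size, hidden size

bert_style = V * H             # one big V x H embedding matrix
albert_style = V * E + E * H   # factorized: V x E lookup, then E x H projection

print(f"BERT-style embedding params: {bert_style:,}")    # 30,720,000
print(f"ALBERT factorized params:    {albert_style:,}")  # 3,971,072
print(f"Reduction factor:            {bert_style / albert_style:.1f}x")
```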
Predicted Entities
How to use
```python
from sparknlp.annotator import AlbertEmbeddings

# Load the pretrained, quantized ALBERT-large embeddings for English
embeddings = AlbertEmbeddings.pretrained("albert_large_uncased_quantized", "en") \
    .setInputCols("sentence", "token") \
    .setOutputCol("embeddings")
```
```scala
import com.johnsnowlabs.nlp.embeddings.AlbertEmbeddings

// Load the pretrained, quantized ALBERT-large embeddings for English
val embeddings = AlbertEmbeddings.pretrained("albert_large_uncased_quantized", "en")
  .setInputCols("sentence", "token")
  .setOutputCol("embeddings")
```
```python
import nlu

text = ["I love NLP"]
# One-liner NLU API: returns a pandas DataFrame with token-level embeddings
embeddings_df = nlu.load('en.embed.albert.large_uncased').predict(text, output_level='token')
embeddings_df
```
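For context, a minimal end-to-end pipeline around the Python snippet above might look like the following sketch. The upstream stages (DocumentAssembler, SentenceDetector, Tokenizer) are standard Spark NLP components assumed here to produce the `sentence` and `token` columns the model expects; they are not part of the original snippets.

```python
import sparknlp
from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector, Tokenizer, AlbertEmbeddings

spark = sparknlp.start()

# Upstream stages producing the "sentence" and "token" columns the model expects
document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentence_detector = SentenceDetector() \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

tokenizer = Tokenizer() \
    .setInputCols(["sentence"]) \
    .setOutputCol("token")

embeddings = AlbertEmbeddings.pretrained("albert_large_uncased_quantized", "en") \
    .setInputCols(["sentence", "token"]) \
    .setOutputCol("embeddings")

pipeline = Pipeline(stages=[document_assembler, sentence_detector, tokenizer, embeddings])

data = spark.createDataFrame([["I love NLP"]]).toDF("text")
result = pipeline.fit(data).transform(data)

# Each row of "embeddings" holds one annotation per token; pull out the vectors
result.selectExpr("explode(embeddings) as emb") \
      .selectExpr("emb.result as token", "emb.embeddings as vector") \
      .show(truncate=80)
```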
Model Information
| Model Name: | albert_large_uncased_quantized |
| Compatibility: | Spark NLP 5.0.2+ |
| License: | Open Source |
| Edition: | Official |
| Input Labels: | [token, sentence] |
| Output Labels: | [embeddings] |
| Language: | en |
| Size: | 71.4 MB |
| Case sensitive: | false |