Description
ELECTRA is a BERT-like model that is pre-trained as a discriminator in a set-up resembling a generative adversarial network (GAN): a small generator network replaces some input tokens with plausible alternatives, and ELECTRA is trained to predict which tokens were replaced. It was originally published in: Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning, "ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators", ICLR 2020.
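For intuition, here is a toy sketch of that replaced-token-detection setup. It is illustrative only and not part of Spark NLP: the random sampler stands in for what is, in the real model, a small masked language model acting as the generator.

import random

def electra_pretraining_step(tokens, vocab, corruption_rate=0.15):
    # A stand-in "generator" corrupts a fraction of the tokens; real
    # ELECTRA uses a small masked language model, not random sampling.
    corrupted, labels = [], []
    for tok in tokens:
        if random.random() < corruption_rate:
            replacement = random.choice(vocab)
            corrupted.append(replacement)
            labels.append(int(replacement != tok))  # 1 = replaced
        else:
            corrupted.append(tok)
            labels.append(0)  # 0 = original
    # The discriminator (ELECTRA) is trained to recover `labels` from
    # `corrupted`, i.e. to spot which tokens were swapped out.
    return corrupted, labels

print(electra_pretraining_step("i love nlp".split(), vocab=["i", "love", "nlp", "spark"]))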
How to use
import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector, Tokenizer, BertEmbeddings
from pyspark.ml import Pipeline

spark = sparknlp.start()

# Standard upstream stages: raw text -> document -> sentences -> tokens
document_assembler = DocumentAssembler().setInputCol("text").setOutputCol("document")
sentence_detector = SentenceDetector().setInputCols(["document"]).setOutputCol("sentence")
tokenizer = Tokenizer().setInputCols(["sentence"]).setOutputCol("token")
embeddings = BertEmbeddings.pretrained("electra_base_uncased", "en") \
.setInputCols("sentence", "token") \
.setOutputCol("embeddings")
nlp_pipeline = Pipeline(stages=[document_assembler, sentence_detector, tokenizer, embeddings])
pipeline_model = nlp_pipeline.fit(spark.createDataFrame([[""]]).toDF("text"))
result = pipeline_model.transform(spark.createDataFrame([['I love NLP']], ["text"]))
import com.johnsnowlabs.nlp.DocumentAssembler
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

// Standard upstream stages: raw text -> document -> sentences -> tokens
val document_assembler = new DocumentAssembler().setInputCol("text").setOutputCol("document")
val sentence_detector = new SentenceDetector().setInputCols("document").setOutputCol("sentence")
val tokenizer = new Tokenizer().setInputCols("sentence").setOutputCol("token")
val embeddings = BertEmbeddings.pretrained("electra_base_uncased", "en")
.setInputCols("sentence", "token")
.setOutputCol("embeddings")
val pipeline = new Pipeline().setStages(Array(document_assembler, sentence_detector, tokenizer, embeddings))
val data = Seq("I love NLP").toDF("text")
val result = pipeline.fit(data).transform(data)
import nlu
text = ["I love NLP"]
embeddings_df = nlu.load('en.embed.electra.base_uncased').predict(text, output_level='token')
embeddings_df
Results
token en_embed_electra_base_uncased_embeddings
I [-0.5244714021682739, -0.0994749441742897, 0.2...
love [-0.14990234375, -0.45483139157295227, 0.28477...
NLP [-0.030217083171010017, -0.43060103058815, -0....
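To work with the vectors directly, the annotations in the `embeddings` column can be exploded into one row per token. The sketch below assumes the `result` DataFrame from the Python example above; the `result` and `embeddings` fields it reads are part of Spark NLP's standard annotation schema.

from pyspark.sql import functions as F

# One row per token: the annotation's `result` field holds the token
# text, its `embeddings` field the 768-dimensional ELECTRA vector.
tokens_with_vectors = result \
    .select(F.explode("embeddings").alias("emb")) \
    .select(F.col("emb.result").alias("token"),
            F.col("emb.embeddings").alias("vector"))
tokens_with_vectors.show(truncate=60)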
Model Information
Model Name: electra_base_uncased
Type: embeddings
Compatibility: Spark NLP 2.6.0+
License: Open Source
Edition: Official
Input Labels: [sentence, token]
Output Labels: [word_embeddings]
Language: [en]
Dimension: 768
Case sensitive: false
Data Source
The model is imported from https://tfhub.dev/google/electra_base/2