Description
ELECTRA is a BERT-like model that is pre-trained as a discriminator in a set-up resembling a generative adversarial network (GAN). It was originally published by Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning: ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators, ICLR 2020.
How to use
...
embeddings = BertSentenceEmbeddings.pretrained("sent_electra_small_uncased", "en") \
.setInputCols("sentence") \
.setOutputCol("sentence_embeddings")
nlp_pipeline = Pipeline(stages=[document_assembler, sentence_detector, embeddings])
pipeline_model = nlp_pipeline.fit(spark.createDataFrame([[""]]).toDF("text"))
result = pipeline_model.transform(spark.createDataFrame([["I hate cancer"], ["Antibiotics aren't painkiller"]], ["text"]))
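The `document_assembler` and `sentence_detector` stages referenced above are elided in the original snippet; a minimal sketch, assuming the standard Spark NLP DocumentAssembler and SentenceDetector annotators:

from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector

# Turns the raw "text" column into Spark NLP document annotations.
document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

# Splits each document into sentences, which the embeddings stage consumes.
sentence_detector = SentenceDetector() \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")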
...
val embeddings = BertSentenceEmbeddings.pretrained("sent_electra_small_uncased", "en")
.setInputCols("sentence")
.setOutputCol("sentence_embeddings")
val pipeline = new Pipeline().setStages(Array(document_assembler, sentence_detector, embeddings))
val data = Seq("I hate cancer", "Antibiotics aren't painkiller").toDF("text")
val result = pipeline.fit(data).transform(data)
import nlu
text = ["I hate cancer", "Antibiotics aren't painkiller"]
embeddings_df = nlu.load('en.embed_sentence.electra_small_uncased').predict(text, output_level='sentence')
embeddings_df
Results
| sentence | en_embed_sentence_electra_small_uncased_embeddings |
|----------|-----------------------------------------------------|
| I hate cancer | [0.4288138449192047, -0.25909560918807983, -0.... |
| Antibiotics aren't painkiller | [0.04786013811826706, 0.14878112077713013, -0.... |
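As a usage sketch beyond the original card: since the model returns one 256-dimensional vector per sentence (see Model Information below), the two rows above can be compared with cosine similarity. Here `embeddings_df` is assumed to be the pandas DataFrame returned by the NLU `predict` call above, with the column named as in the Results header.

import numpy as np

# Assumption: embeddings_df is the pandas DataFrame produced by the
# NLU snippet above, with one 256-dimensional vector per row.
col = "en_embed_sentence_electra_small_uncased_embeddings"
vec_a = np.array(embeddings_df[col].iloc[0])  # "I hate cancer"
vec_b = np.array(embeddings_df[col].iloc[1])  # "Antibiotics aren't painkiller"

# Cosine similarity: dot product of the L2-normalised vectors.
cos_sim = float(np.dot(vec_a, vec_b) / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b)))
print(cos_sim)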
Model Information
| Model Name: | sent_electra_small_uncased |
| Type: | embeddings |
| Compatibility: | Spark NLP 2.6.0+ |
| License: | Open Source |
| Edition: | Official |
| Input Labels: | [sentence] |
| Output Labels: | [sentence_embeddings] |
| Language: | [en] |
| Dimension: | 256 |
| Case sensitive: | false |
Data Source
The model is imported from https://tfhub.dev/google/electra_small/2