BERT Sentence Embeddings trained on Wikipedia and BooksCorpus and fine-tuned on QQP

Description

This model uses a BERT base architecture initialized from https://tfhub.dev/google/experts/bert/wiki_books/1 and fine-tuned on QQP. It keeps the BERT base architecture, but some changes have been made to the original training and export scheme based on more recent findings.

This model is intended to be used for a variety of English NLP tasks. The pre-training data consists of relatively formal text, so the model may not generalize well to more colloquial text such as social media posts or messages.

This model is fine-tuned on QQP and is recommended for semantic similarity tasks over question pairs. The fine-tuning task uses the Quora Question Pairs (QQP) dataset to predict whether two questions are duplicates.
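
As a rough sketch of how such sentence vectors are typically used for duplicate-question scoring (the comparison step below is not part of the model itself, and the vectors are placeholders rather than real embeddings), cosine similarity between the two question embeddings gives a score close to 1.0 for likely duplicates:

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two sentence embedding vectors
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder 768-dimensional vectors standing in for the embeddings
# this model would produce for two questions
q1_vec = np.random.rand(768)
q2_vec = np.random.rand(768)
print(cosine_similarity(q1_vec, q2_vec))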


How to use

Python

from pyspark.ml import Pipeline
from sparknlp.annotator import BertSentenceEmbeddings

sent_embeddings = BertSentenceEmbeddings.pretrained("sent_bert_wiki_books_qqp", "en") \
    .setInputCols("sentence") \
    .setOutputCol("bert_sentence")

nlp_pipeline = Pipeline(stages=[document_assembler, sentence_detector, sent_embeddings])
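
The nlp_pipeline above assumes that document_assembler and sentence_detector have already been defined. A minimal sketch of those upstream stages and of running the pipeline (column names are illustrative) could look like this:

import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector

spark = sparknlp.start()

# Turn the raw text column into Spark NLP documents
document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

# Split documents into sentences for the sentence embedder
sentence_detector = SentenceDetector() \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

# Fit and run the pipeline on a toy DataFrame
data = spark.createDataFrame([["I love NLP"]]).toDF("text")
result = nlp_pipeline.fit(data).transform(data)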
Scala

import com.johnsnowlabs.nlp.embeddings.BertSentenceEmbeddings
import org.apache.spark.ml.Pipeline

val sent_embeddings = BertSentenceEmbeddings.pretrained("sent_bert_wiki_books_qqp", "en")
    .setInputCols("sentence")
    .setOutputCol("bert_sentence")

val pipeline = new Pipeline().setStages(Array(document_assembler, sentence_detector, sent_embeddings))
NLU

import nlu

text = ["I love NLP"]
sent_embeddings_df = nlu.load('en.embed_sentence.bert.wiki_books_qqp').predict(text, output_level='sentence')
sent_embeddings_df
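
After running the Spark pipeline from the Python example above, the raw sentence vectors can be pulled out of the annotation column. The sketch below is one way to do it, assuming the output column is named bert_sentence as in the snippets above:

import numpy as np

# Each row of "bert_sentence" holds one annotation per detected sentence;
# the embeddings field carries the 768-dimensional sentence vector.
rows = result.selectExpr("explode(bert_sentence.embeddings) AS embedding").collect()
vectors = [np.array(r.embedding) for r in rows]

These vectors can then be compared with a scoring function such as the cosine similarity shown in the Description section to rank candidate duplicate pairs.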

Model Information

Model Name: sent_bert_wiki_books_qqp
Compatibility: Spark NLP 3.2.0+
License: Open Source
Edition: Official
Input Labels: [sentence]
Output Labels: [bert_sentence]
Language: en
Case sensitive: false

Data Source

This model has been imported from: https://tfhub.dev/google/experts/bert/wiki_books/qqp/2