DeBERTa base model quantized

Description

The DeBERTa model was proposed in [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. It is based on Google's BERT model released in 2018 and Facebook's RoBERTa model released in 2019. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%), and on RACE by +3.6% (83.2% vs. 86.8%).

Predicted Entities


How to use

Python:

from sparknlp.annotator import DeBertaEmbeddings

embeddings = DeBertaEmbeddings.pretrained("deberta_v3_base_quantized", "en") \
    .setInputCols("sentence", "token") \
    .setOutputCol("embeddings")

Scala:

import com.johnsnowlabs.nlp.embeddings.DeBertaEmbeddings

val embeddings = DeBertaEmbeddings.pretrained("deberta_v3_base_quantized", "en")
  .setInputCols("sentence", "token")
  .setOutputCol("embeddings")

NLU:

import nlu
nlu.load("en.embed.deberta_v3_base").predict("""Put your text here.""")
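
The snippets above only declare the embeddings annotator. Below is a minimal end-to-end pipeline sketch in Python, assuming Spark NLP is installed and a session is started via sparknlp.start(); the sample text and column names are illustrative, not part of the model card.

import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector, Tokenizer, DeBertaEmbeddings
from pyspark.ml import Pipeline

# Start a Spark session with Spark NLP on the classpath.
spark = sparknlp.start()

# Turn raw text into a "document" annotation.
document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

# Split documents into sentences, then sentences into tokens,
# producing the "sentence" and "token" columns the model expects.
sentence_detector = SentenceDetector() \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

tokenizer = Tokenizer() \
    .setInputCols(["sentence"]) \
    .setOutputCol("token")

embeddings = DeBertaEmbeddings.pretrained("deberta_v3_base_quantized", "en") \
    .setInputCols(["sentence", "token"]) \
    .setOutputCol("embeddings")

pipeline = Pipeline(stages=[document_assembler, sentence_detector, tokenizer, embeddings])

# Illustrative input; replace with your own DataFrame of texts.
data = spark.createDataFrame([["Put your text here."]]).toDF("text")
result = pipeline.fit(data).transform(data)

# Each token annotation carries its embedding vector.
result.select("embeddings.embeddings").show(truncate=True)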

Model Information

Model Name: deberta_v3_base_quantized
Compatibility: Spark NLP 5.0.0+
License: Open Source
Edition: Official
Input Labels: [token, sentence]
Output Labels: [embeddings]
Language: en
Size: 310.7 MB
Case sensitive: true
Max sentence length: 128
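
The case sensitivity and maximum sentence length above correspond to configurable parameters on the annotator. A short sketch of setting them explicitly (the values simply mirror the table; the pretrained model already ships with these defaults):

embeddings = DeBertaEmbeddings.pretrained("deberta_v3_base_quantized", "en") \
    .setInputCols(["sentence", "token"]) \
    .setOutputCol("embeddings") \
    .setCaseSensitive(True) \
    .setMaxSentenceLength(128)  # inputs longer than this are truncated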

Benchmarking