## Description
The DeBERTa model was proposed in [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654) by Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. It is based on Google’s BERT model released in 2018 and Facebook’s RoBERTa model released in 2019. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%), and on RACE by +3.6% (83.2% vs. 86.8%).
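The "disentangled attention" in the name refers to the paper's decomposition of each attention score into separate content and relative-position terms. As a minimal sketch in the paper's notation (not anything specific to this Spark NLP model):

```latex
% Disentangled attention score between tokens i and j, per the DeBERTa paper:
% Q^c, K^c project token content; Q^r, K^r project relative-position
% embeddings; \delta(i,j) is the bucketed relative distance from i to j.
\tilde{A}_{i,j} =
    \underbrace{Q_i^{c} {K_j^{c}}^{\!\top}}_{\text{content-to-content}}
  + \underbrace{Q_i^{c} {K_{\delta(i,j)}^{r}}^{\!\top}}_{\text{content-to-position}}
  + \underbrace{K_j^{c} {Q_{\delta(j,i)}^{r}}^{\!\top}}_{\text{position-to-content}}
```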
## Predicted Entities
## How to use
```python
from sparknlp.annotator import DeBertaEmbeddings

embeddings = DeBertaEmbeddings.pretrained("deberta_v3_base", "en") \
    .setInputCols("sentence", "token") \
    .setOutputCol("embeddings")
```
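Since the annotator consumes `sentence` and `token` columns, it normally sits after a document assembler, sentence detector, and tokenizer. A minimal end-to-end sketch (the input text is only a placeholder):

```python
import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector, Tokenizer, DeBertaEmbeddings
from pyspark.ml import Pipeline

spark = sparknlp.start()

# Upstream stages that produce the "sentence" and "token" columns
document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentence_detector = SentenceDetector() \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

tokenizer = Tokenizer() \
    .setInputCols(["sentence"]) \
    .setOutputCol("token")

embeddings = DeBertaEmbeddings.pretrained("deberta_v3_base", "en") \
    .setInputCols(["sentence", "token"]) \
    .setOutputCol("embeddings")

pipeline = Pipeline(stages=[document_assembler, sentence_detector, tokenizer, embeddings])

data = spark.createDataFrame([["Put your text here."]]).toDF("text")
result = pipeline.fit(data).transform(data)

# One embedding vector per token
result.selectExpr("explode(embeddings.embeddings)").show(truncate=False)
```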
```scala
import com.johnsnowlabs.nlp.embeddings.DeBertaEmbeddings

val embeddings = DeBertaEmbeddings.pretrained("deberta_v3_base", "en")
  .setInputCols("sentence", "token")
  .setOutputCol("embeddings")
```
```python
import nlu
nlu.load("en.embed.deberta_v3_base").predict("""Put your text here.""")
```
## Model Information
| Model Name: | deberta_v3_base |
| --- | --- |
| Compatibility: | Spark NLP 3.4.2+ |
| License: | Open Source |
| Edition: | Official |
| Input Labels: | [token, sentence] |
| Output Labels: | [embeddings] |
| Language: | en |
| Size: | 436.4 MB |
| Case sensitive: | true |
| Max sentence length: | 128 |
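The last two rows of the table correspond to annotator parameters. As a hedged sketch, the values below simply restate the table's defaults via the standard setters; override them only if you know your inputs need it:

```python
from sparknlp.annotator import DeBertaEmbeddings

# Restating the defaults listed in the Model Information table above;
# sentences longer than maxSentenceLength are truncated by the annotator.
embeddings = DeBertaEmbeddings.pretrained("deberta_v3_base", "en") \
    .setInputCols(["sentence", "token"]) \
    .setOutputCol("embeddings") \
    .setCaseSensitive(True) \
    .setMaxSentenceLength(128)
```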
## References
https://huggingface.co/microsoft/deberta-v3-base
## Benchmarking
#### Fine-tuning on NLU tasks
Dev set results on the SQuAD 2.0 and MNLI tasks.
| Model                  | Vocabulary (K) | Backbone #Params (M) | SQuAD 2.0 (F1/EM) | MNLI-m/mm (Acc) |
|------------------------|----------------|----------------------|-------------------|-----------------|
| RoBERTa-base           | 50             | 86                   | 83.7/80.5         | 87.6/-          |
| XLNet-base             | 32             | 92                   | -/80.2            | 86.8/-          |
| ELECTRA-base           | 30             | 86                   | -/80.5            | 88.8/-          |
| DeBERTa-base           | 50             | 100                  | 86.2/83.1         | 88.8/88.5       |
| DeBERTa-v3-base        | 128            | 86                   | **88.4/85.4**     | **90.6/90.7**   |
| DeBERTa-v3-base + SiFT | 128            | 86                   | -/-               | 91.0/-          |