Italian BERT Base Cased

Description

The source data for the Italian BERT model consists of a recent Wikipedia dump and various texts from the OPUS corpora collection. The final training corpus has a size of 13GB and 2,050,057,573 tokens.


How to use

Python

from sparknlp.annotator import BertEmbeddings
from pyspark.ml import Pipeline

# document_assembler, sentence_detector and tokenizer are the usual upstream stages
embeddings = BertEmbeddings.pretrained("bert_base_italian_cased", "it") \
      .setInputCols("sentence", "token") \
      .setOutputCol("embeddings")
nlp_pipeline = Pipeline(stages=[document_assembler, sentence_detector, tokenizer, embeddings])
Scala

import com.johnsnowlabs.nlp.embeddings.BertEmbeddings
import org.apache.spark.ml.Pipeline

// document_assembler, sentence_detector and tokenizer are the usual upstream stages
val embeddings = BertEmbeddings.pretrained("bert_base_italian_cased", "it")
      .setInputCols("sentence", "token")
      .setOutputCol("embeddings")
val pipeline = new Pipeline().setStages(Array(document_assembler, sentence_detector, tokenizer, embeddings))

NLU

import nlu
nlu.load("it.embed.bert").predict("""Put your text here.""")
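
The snippets above assume that the upstream document_assembler, sentence_detector and tokenizer stages are already defined. A minimal end-to-end sketch of such a pipeline is given below; the input column name and the example sentence are illustrative, not part of the model card.

import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector, Tokenizer, BertEmbeddings
from pyspark.ml import Pipeline

spark = sparknlp.start()

# Upstream stages referred to by the snippets above
document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentence_detector = SentenceDetector() \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

tokenizer = Tokenizer() \
    .setInputCols(["sentence"]) \
    .setOutputCol("token")

embeddings = BertEmbeddings.pretrained("bert_base_italian_cased", "it") \
    .setInputCols(["sentence", "token"]) \
    .setOutputCol("embeddings")

pipeline = Pipeline(stages=[document_assembler, sentence_detector, tokenizer, embeddings])

# Apply the pipeline to a small DataFrame; the "embeddings" column holds one vector per token
data = spark.createDataFrame([["Mi piace Spark NLP."]]).toDF("text")
result = pipeline.fit(data).transform(data)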

Model Information

Model Name: bert_base_italian_cased
Compatibility: Spark NLP 3.1.0+
License: Open Source
Edition: Official
Input Labels: [token, sentence]
Output Labels: [embeddings]
Language: it
Case sensitive: true

Data Source

https://huggingface.co/dbmdz/bert-base-italian-cased