Multilingual distilbert_base_multilingual_cased_bulgarian_wikipedia DistilBertEmbeddings from mor40

Description

Pretrained DistilBertEmbeddings model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. distilbert_base_multilingual_cased_bulgarian_wikipedia is a multilingual model originally trained by mor40.


How to use



import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, DistilBertEmbeddings
from pyspark.ml import Pipeline

spark = sparknlp.start()

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("documents")

# The embeddings annotator consumes the "token" column, so a Tokenizer stage is required
tokenizer = Tokenizer() \
    .setInputCols(["documents"]) \
    .setOutputCol("token")

embeddings = DistilBertEmbeddings.pretrained("distilbert_base_multilingual_cased_bulgarian_wikipedia", "xx") \
    .setInputCols(["documents", "token"]) \
    .setOutputCol("embeddings")

pipeline = Pipeline().setStages([document_assembler, tokenizer, embeddings])

data = spark.createDataFrame([["I love Spark NLP"]]).toDF("text")

pipelineModel = pipeline.fit(data)

pipelineDF = pipelineModel.transform(data)
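
The output embeddings column holds one annotation per token, with the token text in its result field and the vector in its embeddings field. A minimal sketch for inspecting the result of the Python pipeline above (column names follow that setup):

# One row per token: the token text and its embedding vector
pipelineDF.selectExpr("explode(embeddings) as ann") \
    .selectExpr("ann.result as token", "ann.embeddings as embedding") \
    .show(truncate=60)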



import com.johnsnowlabs.nlp.base.DocumentAssembler
import com.johnsnowlabs.nlp.annotator.{Tokenizer, DistilBertEmbeddings}
import org.apache.spark.ml.Pipeline
import spark.implicits._

val document_assembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("documents")

// The embeddings annotator consumes the "token" column, so a Tokenizer stage is required
val tokenizer = new Tokenizer()
    .setInputCols(Array("documents"))
    .setOutputCol("token")

val embeddings = DistilBertEmbeddings
    .pretrained("distilbert_base_multilingual_cased_bulgarian_wikipedia", "xx")
    .setInputCols(Array("documents", "token"))
    .setOutputCol("embeddings")

val pipeline = new Pipeline().setStages(Array(document_assembler, tokenizer, embeddings))

val data = Seq("I love Spark NLP").toDF("text")

val pipelineModel = pipeline.fit(data)

val pipelineDF = pipelineModel.transform(data)


Model Information

Model Name: distilbert_base_multilingual_cased_bulgarian_wikipedia
Compatibility: Spark NLP 5.1.2+
License: Open Source
Edition: Official
Input Labels: [documents, token]
Output Labels: [embeddings]
Language: xx
Size: 505.3 MB
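
For production jobs it can be convenient to avoid re-downloading the roughly 505 MB model on every run. A minimal sketch, using standard Spark ML persistence and a hypothetical path, of saving the fitted pipeline once and reloading it later:

from pyspark.ml import PipelineModel

# Save the fitted pipeline (hypothetical path); the pretrained weights are bundled with it
pipelineModel.write().overwrite().save("/models/distilbert_bg_wikipedia_pipeline")

# Reload in a later job and run inference directly
loaded_pipeline = PipelineModel.load("/models/distilbert_bg_wikipedia_pipeline")
result = loaded_pipeline.transform(data)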

References

https://huggingface.co/mor40/distilbert-base-multilingual-cased-bg-wikipedia