Lemma UD model for Slavic (lemma_torot)

Description

Pretrained Lemmatizer model (lemma_torot), trained on Universal Dependencies 2.9 (UD_Slavic-TOROT) for the Slavic language (orv).


How to use


from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetectorDLModel, Tokenizer, LemmatizerModel
from pyspark.ml import Pipeline

document = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentence = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx") \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

tokenizer = Tokenizer() \
    .setInputCols(["sentence"]) \
    .setOutputCol("token")

lemma = LemmatizerModel.pretrained("lemma_torot", "orv") \
    .setInputCols(["token"]) \
    .setOutputCol("lemma")

pipeline = Pipeline(stages=[document, sentence, tokenizer, lemma])

data = spark.createDataFrame([["I love Spark NLP"]]).toDF("text")

result = pipeline.fit(data).transform(data)

import spark.implicits._
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline

val document = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val sentence = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")
  .setInputCols("document")
  .setOutputCol("sentence")

val tokenizer = new Tokenizer()
  .setInputCols("sentence")
  .setOutputCol("token")

val lemma = LemmatizerModel.pretrained("lemma_torot", "orv")
  .setInputCols("token")
  .setOutputCol("lemma")

val pipeline = new Pipeline().setStages(Array(document, sentence, tokenizer, lemma))

val data = Seq("I love Spark NLP").toDF("text")

val result = pipeline.fit(data).transform(data)

import nlu
nlu.load("orv.lemma").predict("""I love Spark NLP""")
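The predicted lemmas are returned in the result field of the lemma annotation column. A minimal PySpark sketch for inspecting them, assuming the result DataFrame produced by the Python pipeline above (the column names token and lemma are the ones set in that pipeline):

# Flatten the lemma annotations into one predicted lemma per row.
result.selectExpr("explode(lemma.result) AS lemma").show(truncate=False)

# Or show the token and lemma arrays side by side for comparison.
result.selectExpr("token.result AS tokens", "lemma.result AS lemmas").show(truncate=False)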

Model Information

Model Name: lemma_torot
Compatibility: Spark NLP 3.4.3+
License: Open Source
Edition: Official
Input Labels: [form]
Output Labels: [lemma]
Language: orv
Size: 427.2 KB
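
To check which pretrained models are published for this language from within Spark NLP, the ResourceDownloader utility can list them; a minimal sketch, assuming a running Spark NLP session:

from sparknlp.pretrained import ResourceDownloader

# Print the publicly available pretrained models for language code "orv";
# "lemma_torot" should appear in the listed models.
ResourceDownloader.showPublicModels(lang="orv")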

References

The model was trained on the Universal Dependencies 2.9 treebank UD_Slavic-TOROT: https://github.com/UniversalDependencies/UD_Slavic-TOROT