Description
This is a dictionary-based lemmatizer for Welsh that maps all forms and inflections of a word to a single root (lemma). This enables the pipeline to treat, for example, the past and present tenses of a verb as the same word rather than as two unrelated words.
How to use
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, LemmatizerModel
from pyspark.ml import Pipeline

# Convert raw text into document annotations
document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")
# Split each document into tokens
tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")
# Load the pretrained Welsh lemmatizer
lemmatizer = LemmatizerModel.pretrained("lemma", "cy") \
    .setInputCols(["token"]) \
    .setOutputCol("lemma")

pipeline = Pipeline(stages=[document_assembler, tokenizer, lemmatizer])
example = spark.createDataFrame([["Dywedir yn aml taw rygbi 'r undeb yw mabolgamp genedlaethol Cymru , er mae pêl-droed yn denu mwy o wylwyr i 'r maes ."]], ["text"])
results = pipeline.fit(example).transform(example)
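To obtain a per-token view like the one shown in the Results section below, the array of lemma annotations in the output DataFrame can be flattened with a Spark SQL explode. This is an illustrative sketch against the results DataFrame created above (the column name follows the pipeline configuration; the display call itself is not part of the original example):

# Flatten the lemma annotations into one row per token and show the lemma text
results.selectExpr("explode(lemma.result) as lemma").show(20)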
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

// Convert raw text into document annotations
val document_assembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")
// Split each document into tokens
val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")
// Load the pretrained Welsh lemmatizer
val lemmatizer = LemmatizerModel.pretrained("lemma", "cy")
    .setInputCols("token")
    .setOutputCol("lemma")

val pipeline = new Pipeline().setStages(Array(document_assembler, tokenizer, lemmatizer))
val data = Seq("Dywedir yn aml taw rygbi 'r undeb yw mabolgamp genedlaethol Cymru , er mae pêl-droed yn denu mwy o wylwyr i 'r maes .").toDF("text")
val result = pipeline.fit(data).transform(data)
import nlu

text = ["Dywedir yn aml taw rygbi 'r undeb yw mabolgamp genedlaethol Cymru , er mae pêl-droed yn denu mwy o wylwyr i 'r maes ."]
# Load the Welsh lemmatizer through the NLU wrapper and lemmatize the text
lemma_df = nlu.load('cy.lemma').predict(text, output_level="document")
lemma_df.lemma.values[0]
Results
+------------+
| lemma|
+------------+
| Dywedir|
| yn|
| aml|
| taw|
| rygbi|
| '|
| r|
| undeb|
| bod|
| mabolgamp|
|cenedlaethol|
| Cymru|
| ,|
| er|
| bod|
| pêl-droed|
| yn|
| denu|
| mawr|
| o|
+------------+
only showing top 20 rows
Model Information
Model Name: lemma
Compatibility: Spark NLP 3.0.0+
License: Open Source
Edition: Official
Input Labels: [token]
Output Labels: [lemma]
Language: cy
Data Source
The model was trained on Universal Dependencies version 2.7.
Benchmarking
Precision=0.74, Recall=0.71, F1-score=0.72