Description
This model is a dictionary-based lemmatizer for Afrikaans that maps all forms and inflections of a word to a single root (lemma). This lets the pipeline treat, for example, the past and present tense of a verb as the same word rather than as two unrelated words.
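The idea behind a dictionary-based lemmatizer can be sketched in a few lines of plain Python. The dictionary below is a toy illustration with a handful of hand-picked Afrikaans entries, not the model's actual lookup data:

```python
# Toy dictionary-based lemmatizer: each inflected form maps to its root.
# These few entries are illustrative only, not the model's real dictionary.
lemma_dict = {
    "geïmplementeer": "implementeer",  # past participle -> root verb
    "besliste": "beslis",              # inflected adjective -> root
}

def lemmatize(token: str) -> str:
    # Tokens not found in the dictionary fall back to themselves.
    return lemma_dict.get(token, token)

print([lemmatize(t) for t in ["besliste", "geïmplementeer", "op"]])
# -> ['beslis', 'implementeer', 'op']
```

The fallback-to-self behavior matters in practice: punctuation, numbers, and out-of-vocabulary tokens pass through unchanged, as seen in the Results table below.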
How to use
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, LemmatizerModel
from pyspark.ml import Pipeline

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

lemmatizer = LemmatizerModel.pretrained("lemma", "af") \
    .setInputCols(["token"]) \
    .setOutputCol("lemma")

pipeline = Pipeline(stages=[document_assembler, tokenizer, lemmatizer])
model = pipeline.fit(spark.createDataFrame([[""]]).toDF("text"))

data = spark.createDataFrame([["Ons het besliste teen-resessiebesteding deur die regering geïmplementeer , veral op infrastruktuur ."]]).toDF("text")
results = model.transform(data)
import com.johnsnowlabs.nlp.DocumentAssembler
import com.johnsnowlabs.nlp.annotator.{Tokenizer, LemmatizerModel}
import org.apache.spark.ml.Pipeline
import spark.implicits._

val document_assembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")

val lemmatizer = LemmatizerModel.pretrained("lemma", "af")
  .setInputCols("token")
  .setOutputCol("lemma")

val pipeline = new Pipeline().setStages(Array(document_assembler, tokenizer, lemmatizer))

val data = Seq("Ons het besliste teen-resessiebesteding deur die regering geïmplementeer , veral op infrastruktuur .").toDF("text")
val result = pipeline.fit(data).transform(data)
import nlu
text = ["Ons het besliste teen-resessiebesteding deur die regering geïmplementeer , veral op infrastruktuur ."]
lemma_df = nlu.load('af.lemma').predict(text, output_level = "document")
lemma_df.lemma.values[0]
Results
+--------------------+
| lemma|
+--------------------+
| ons|
| het|
| beslis|
|teen-resessiebest...|
| deur|
| die|
| regering|
| implementeer|
| ,|
| veral|
| op|
| infrastruktuur|
| .|
+--------------------+
Model Information
Model Name: lemma
Compatibility: Spark NLP 2.7.5+
License: Open Source
Edition: Official
Input Labels: [token]
Output Labels: [lemma]
Language: af
Data Source
The model was trained on Universal Dependencies version 2.7.
Benchmarking
Precision=0.81, Recall=0.78, F1-score=0.79
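As a quick sanity check, the reported F1-score is consistent with the standard harmonic mean of the precision and recall figures above:

```python
# Verify that F1 is the harmonic mean of the reported precision and recall.
precision, recall = 0.81, 0.78
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # -> 0.79
```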