Description
This is a dictionary-based lemmatizer that assigns all forms and inflections of a word to a single root. This enables the pipeline to treat the past and present tense of a verb, for example, as the same word instead of two completely different words.
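At its core, a dictionary-based lemmatizer is a lookup table from surface forms to a root form. The short Python sketch below illustrates the idea with a few hypothetical English entries; it is only a conceptual illustration, not the model's actual dictionary or the library's implementation.

# Conceptual sketch only: hypothetical dictionary entries, not the model's data.
lemma_dict = {"went": "go", "gone": "go", "goes": "go", "going": "go"}

def lemmatize(token):
    # Fall back to the surface form when no dictionary entry exists.
    return lemma_dict.get(token.lower(), token)

print([lemmatize(t) for t in ["She", "went", "home"]])  # ['She', 'go', 'home']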
How to use
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, LemmatizerModel
from pyspark.ml import Pipeline

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

lemmatizer = LemmatizerModel.pretrained("lemma", "vi") \
    .setInputCols(["token"]) \
    .setOutputCol("lemma")

pipeline = Pipeline(stages=[document_assembler, tokenizer, lemmatizer])

example = spark.createDataFrame([["Tất cả đều hồi hộp ."]], ["text"])
results = pipeline.fit(example).transform(example)
import com.johnsnowlabs.nlp.DocumentAssembler
import com.johnsnowlabs.nlp.annotator.{Tokenizer, LemmatizerModel}
import org.apache.spark.ml.Pipeline
import spark.implicits._

val document_assembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val tokenizer = new Tokenizer()
    .setInputCols("document")
    .setOutputCol("token")

val lemmatizer = LemmatizerModel.pretrained("lemma", "vi")
    .setInputCols("token")
    .setOutputCol("lemma")

val pipeline = new Pipeline().setStages(Array(document_assembler, tokenizer, lemmatizer))

val data = Seq("Tất cả đều hồi hộp .").toDF("text")
val result = pipeline.fit(data).transform(data)
import nlu
text = ["Tất cả đều hồi hộp ."]
lemma_df = nlu.load('vi.lemma').predict(text, output_level = "document")
lemma_df.lemma.values[0]
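To reproduce the table shown under Results, the lemma annotations can be flattened to one row per token. This is a minimal sketch, assuming the results DataFrame produced by the Python example above:

# Explode the lemma annotations so each token's lemma becomes its own row.
results.selectExpr("explode(lemma.result) as lemma").show()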
Results
+-----+
|lemma|
+-----+
| Tất|
| cả|
| đều|
| hồi|
| hộp|
| .|
+-----+
Model Information
Model Name: lemma
Compatibility: Spark NLP 3.0.0+
License: Open Source
Edition: Official
Input Labels: [token]
Output Labels: [lemma]
Language: vi
Data Source
The model was trained on Universal Dependencies version 2.7.
Benchmarking
Precision=0.96, Recall=0.89, F1-score=0.93