Description
This Indonesian lemmatizer is a scalable, production-ready version of the rule-based lemmatizer available in the spaCy Lookups Data repository.
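Conceptually, a lookup lemmatizer of this kind resolves each token through a precompiled token-to-lemma table and falls back to the surface form when no entry exists. A minimal Python sketch of the idea (illustrative only, not the actual Spark NLP implementation; the empty table is a placeholder for the spaCy lookups data):
# Conceptual sketch of dictionary-based lemmatization (placeholder table)
lookup = {}  # in practice, populated from the Indonesian spaCy lookups data

def lemmatize(token):
    # Return the dictionary lemma if present, otherwise the token itself
    return lookup.get(token, token)

print([lemmatize(t) for t in ["Anda", "tidak", "lebih", "baik", "dari", "saya"]])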
How to use
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, LemmatizerModel
from pyspark.ml import Pipeline

# Assumes an active SparkSession `spark`, e.g. created with sparknlp.start()
documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

lemmatizer = LemmatizerModel.pretrained("lemma_spacylookup", "id") \
    .setInputCols(["token"]) \
    .setOutputCol("lemma")

pipeline = Pipeline(stages=[documentAssembler, tokenizer, lemmatizer])

example = spark.createDataFrame([["Anda tidak lebih baik dari saya"]], ["text"])
results = pipeline.fit(example).transform(example)
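To inspect the output of the Python pipeline above, the transformed DataFrame can be queried directly; a minimal sketch (the column name follows from setOutputCol("lemma") and produces output like the table in the Results section below):
# Show the lemma strings produced for each input row
results.selectExpr("lemma.result").show(truncate=False)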
import com.johnsnowlabs.nlp.DocumentAssembler
import com.johnsnowlabs.nlp.annotators.{Tokenizer, LemmatizerModel}
import org.apache.spark.ml.Pipeline
import spark.implicits._

// Assumes an active SparkSession named `spark` (available by default in spark-shell)
val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols(Array("document"))
  .setOutputCol("token")

val lemmatizer = LemmatizerModel.pretrained("lemma_spacylookup", "id")
  .setInputCols(Array("token"))
  .setOutputCol("lemma")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, lemmatizer))

val data = Seq("Anda tidak lebih baik dari saya").toDF("text")
val results = pipeline.fit(data).transform(data)
import nlu
nlu.load("id.lemma.spacylookup").predict("""Anda tidak lebih baik dari saya""")
Results
+--------------------------------------+
|result |
+--------------------------------------+
|[Anda, tidak, lebih, baik, dari, saya]|
+--------------------------------------+
Model Information
Model Name: lemma_spacylookup
Compatibility: Spark NLP 3.4.1+
License: Open Source
Edition: Official
Input Labels: [token]
Output Labels: [lemma]
Language: id
Size: 370.9 KB