Description
This is a scalable, production-ready stop-words remover model for Sinhala, trained on the stop-word lists available in the stopwords-iso corpus.
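At inference time the cleaner simply drops every token that appears in its pretrained stop-word list. The behaviour can be sketched in plain Python; the one-entry stop-word set below is a hypothetical stand-in for the full stopwords-iso Sinhala list, chosen to match the example output shown under Results:

```python
# Minimal sketch of stop-word removal. The stop-word set is a hypothetical
# stand-in for the full stopwords-iso Sinhala list.
stopwords = {"වඩා"}  # assumed entry, consistent with the example output below

def remove_stopwords(tokens):
    """Keep only tokens that are not in the stop-word set."""
    return [t for t in tokens if t not in stopwords]

tokens = "ඔබ මට වඩා හොඳ නැත".split()
print(remove_stopwords(tokens))  # ['ඔබ', 'මට', 'හොඳ', 'නැත']
```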
How to use
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, StopWordsCleaner
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

stop_words = StopWordsCleaner.pretrained("stopwords_iso", "si") \
    .setInputCols(["token"]) \
    .setOutputCol("cleanTokens")

pipeline = Pipeline(stages=[documentAssembler, tokenizer, stop_words])

example = spark.createDataFrame([["ඔබ මට වඩා හොඳ නැත"]], ["text"])
results = pipeline.fit(example).transform(example)
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val tokenizer = new Tokenizer()
  .setInputCols(Array("document"))
  .setOutputCol("token")

val stopWords = StopWordsCleaner.pretrained("stopwords_iso", "si")
  .setInputCols(Array("token"))
  .setOutputCol("cleanTokens")

val pipeline = new Pipeline().setStages(Array(documentAssembler, tokenizer, stopWords))

val data = Seq("ඔබ මට වඩා හොඳ නැත").toDF("text")
val results = pipeline.fit(data).transform(data)
import nlu
nlu.load("si.stopwords").predict("""ඔබ මට වඩා හොඳ නැත""")
Results
+------------------+
|result |
+------------------+
|[ඔබ, මට, හොඳ, නැත]|
+------------------+
Model Information
Model Name: stopwords_iso
Compatibility: Spark NLP 3.4.1+
License: Open Source
Edition: Official
Input Labels: [token]
Output Labels: [cleanTokens]
Language: si
Size: 2.2 KB