Description
A Part of Speech (POS) classifier predicts a grammatical label for every token in the input text. It is implemented with an averaged perceptron architecture.
Predicted Entities
- CCONJ
- ADV
- SCONJ
- DET
- NOUN
- VERB
- ADJ
- PUNCT
- AUX
- PRON
- PROPN
- NUM
- INTJ
- PART
- X
- ADP
- SYM
How to use
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector, Tokenizer, PerceptronModel
from pyspark.ml import Pipeline

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentence_detector = SentenceDetector() \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

tokenizer = Tokenizer() \
    .setInputCols(["sentence"]) \
    .setOutputCol("token")

pos = PerceptronModel.pretrained("pos_ud_kaist", "ko") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("pos")

pipeline = Pipeline(stages=[
    document_assembler,
    sentence_detector,
    tokenizer,
    pos
])

example = spark.createDataFrame([["John Snow Labs에서 안녕하세요! "]], ["text"])
result = pipeline.fit(example).transform(example)
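The `result` DataFrame holds `token` and `pos` annotation columns. As a minimal sketch (not part of the original example), the predicted tags can be viewed next to the token texts like this, assuming the Python pipeline above has been run:

# Sketch: show the token texts next to their predicted POS tags.
# token.result and pos.result pull the string results out of the annotation structs.
result.selectExpr("token.result as tokens", "pos.result as tags") \
    .show(truncate=False)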
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

val document_assembler = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")
val sentence_detector = new SentenceDetector()
    .setInputCols("document")
    .setOutputCol("sentence")
val tokenizer = new Tokenizer()
    .setInputCols("sentence")
    .setOutputCol("token")
val pos = PerceptronModel.pretrained("pos_ud_kaist", "ko")
    .setInputCols(Array("document", "token"))
    .setOutputCol("pos")
val pipeline = new Pipeline().setStages(Array(document_assembler, sentence_detector, tokenizer, pos))
val data = Seq("John Snow Labs에서 안녕하세요! ").toDF("text")
val result = pipeline.fit(data).transform(data)
import nlu
text = [""John Snow Labs에서 안녕하세요! ""]
token_df = nlu.load('ko.pos.ud_kaist').predict(text)
token_df
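For quick single-string inference without building a DataFrame, a LightPipeline can wrap the fitted Spark pipeline from the Python example above. A rough sketch, assuming `pipeline` and `example` are defined as shown earlier:

from sparknlp.base import LightPipeline

# Sketch: annotate one string in memory with the fitted pipeline from above.
light = LightPipeline(pipeline.fit(example))
annotations = light.annotate("John Snow Labs에서 안녕하세요!")
print(annotations["pos"])  # list of predicted POS tags, one per token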
Results
|    | token  | pos   |
|----|--------|-------|
| 0  | J      | NOUN  |
| 1  | o      | NOUN  |
| 2  | h      | NOUN  |
| 3  | n      | SCONJ |
| 4  | S      | X     |
| 5  | n      | X     |
| 6  | o      | X     |
| 7  | w      | X     |
| 8  | L      | X     |
| 9  | a      | X     |
| 10 | b      | X     |
| 11 | s      | X     |
| 12 | 에     | ADP   |
| 13 | 서     | SCONJ |
| 14 | 안     | ADV   |
| 15 | 녕     | VERB  |
| 16 | 하세요 | VERB  |
| 17 | !      | PUNCT |
Model Information
| Model Name:    | pos_ud_kaist      |
| Compatibility: | Spark NLP 3.0.0+  |
| License:       | Open Source       |
| Edition:       | Official          |
| Input Labels:  | [document, token] |
| Output Labels: | [pos]             |
| Language:      | ko                |