Part of Speech for German

Description

A Part of Speech classifier predicts a grammatical label (a Universal Dependencies POS tag) for every token in the input text. This model is implemented with an averaged perceptron architecture.

Predicted Entities

  • ADP
  • DET
  • ADJ
  • NOUN
  • VERB
  • PRON
  • PROPN
  • X
  • PUNCT
  • CCONJ
  • NUM
  • ADV
  • AUX
  • SCONJ
  • PART
  • INTJ


How to use

from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector, Tokenizer, PerceptronModel
from pyspark.ml import Pipeline

# Build a Spark NLP pipeline: document -> sentences -> tokens -> POS tags
document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentence_detector = SentenceDetector() \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

tokenizer = Tokenizer() \
    .setInputCols(["sentence"]) \
    .setOutputCol("token")

posTagger = PerceptronModel.pretrained("pos_ud_hdt", "de") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("pos")

pipeline = Pipeline(stages=[
    document_assembler,
    sentence_detector,
    tokenizer,
    posTagger
])

data = spark.createDataFrame([["Hallo aus John Snow Labs! "]], ["text"])

result = pipeline.fit(data).transform(data)
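
To get token/tag pairs for a single string without building a DataFrame, the fitted pipeline can also be wrapped in Spark NLP's LightPipeline. A minimal sketch, reusing the pipeline and data objects defined above:

from sparknlp.base import LightPipeline

# Fit the pipeline once, then annotate plain strings in memory.
light_model = LightPipeline(pipeline.fit(data))
annotations = light_model.annotate("Hallo aus John Snow Labs!")

# Pair each token with its predicted tag, e.g. [('Hallo', 'NOUN'), ('aus', 'ADP'), ...]
print(list(zip(annotations["token"], annotations["pos"])))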

import com.johnsnowlabs.nlp.base.DocumentAssembler
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

val document_assembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val sentence_detector = new SentenceDetector()
  .setInputCols("document")
  .setOutputCol("sentence")

// The tagger needs token annotations, so a Tokenizer stage is required.
val tokenizer = new Tokenizer()
  .setInputCols("sentence")
  .setOutputCol("token")

val pos = PerceptronModel.pretrained("pos_ud_hdt", "de")
  .setInputCols(Array("document", "token"))
  .setOutputCol("pos")

val pipeline = new Pipeline().setStages(Array(document_assembler, sentence_detector, tokenizer, pos))

val data = Seq("Hallo aus John Snow Labs! ").toDF("text")
val result = pipeline.fit(data).transform(data)


import nlu
text = ["Hallo aus John Snow Labs!"]
token_df = nlu.load('de.pos').predict(text)
token_df
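
The call returns a pandas-style DataFrame; a minimal sketch for keeping just the token/tag columns shown under Results (assuming the column names match the output below):

token_df[["token", "pos"]]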

Results

   token    pos
0  Hallo   NOUN
1    aus    ADP
2   John  PROPN
3   Snow  PROPN
4   Labs  PROPN
5      !  PUNCT

Model Information

Model Name: pos_ud_hdt
Compatibility: Spark NLP 3.0.0+
License: Open Source
Edition: Official
Input Labels: [document, token]
Output Labels: [pos]
Language: de