Description
The explain_document_lg is a pretrained pipeline that processes text with a simple sequence of stages (sentence detection, tokenization, lemmatization, part-of-speech tagging, word embeddings, and named entity recognition), covering most of the common text processing tasks on your DataFrame.
How to use
from sparknlp.pretrained import PretrainedPipeline
pipeline = PretrainedPipeline('explain_document_lg', lang = 'pl')
annotations = pipeline.fullAnnotate("Witaj z John Snow Labs! ")[0]
annotations.keys()
import com.johnsnowlabs.nlp.pretrained.PretrainedPipeline
val pipeline = new PretrainedPipeline("explain_document_lg", lang = "pl")
val result = pipeline.fullAnnotate("Witaj z John Snow Labs! ")(0)
import nlu
text = [""Witaj z John Snow Labs! ""]
result_df = nlu.load('pl.explain.lg').predict(text)
result_df
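Building on the Python example above, the recognized entity chunks can be read straight off the `fullAnnotate` output. Below is a minimal sketch, assuming the `entities` output column shown in the Results section; the exact metadata keys are an assumption based on typical Spark NLP annotation objects.

```python
# Minimal sketch: iterate over the entity chunks returned by fullAnnotate.
# Each chunk annotation exposes the matched text in `result` and (assumed here)
# the NER label, e.g. PER, in its metadata under the 'entity' key.
for chunk in annotations['entities']:
    print(chunk.result, chunk.metadata.get('entity'))
```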
Results
| | document | sentence | token | lemma | pos | embeddings | ner | entities |
|---:|:-----------------------------|:----------------------------|:----------------------------------------|:----------------------------------------|:-------------------------------------------|:-----------------------------|:--------------------------------------|:--------------------|
| 0 | ['Witaj z John Snow Labs! '] | ['Witaj z John Snow Labs!'] | ['Witaj', 'z', 'John', 'Snow', 'Labs!'] | ['witać', 'z', 'John', 'Snow', 'Labs!'] | ['VERB', 'ADP', 'PROPN', 'PROPN', 'PROPN'] | [[0.4977500140666961,.,...]] | ['O', 'O', 'B-PER', 'I-PER', 'I-PER'] | ['John Snow Labs!'] |
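The table above is the flattened view produced by NLU's `predict`. With the `fullAnnotate` output from the Python example, the same token-level columns can be paired up directly; this is a sketch that assumes the column names (`token`, `lemma`, `pos`) shown in the table.

```python
# Pair each token with its lemma and POS tag from the fullAnnotate output.
for tok, lem, pos in zip(annotations['token'], annotations['lemma'], annotations['pos']):
    print(f"{tok.result}\t{lem.result}\t{pos.result}")
```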
Model Information
| Model Name: | explain_document_lg |
|:---|:---|
| Type: | pipeline |
| Compatibility: | Spark NLP 3.0.0+ |
| License: | Open Source |
| Edition: | Official |
| Language: | pl |