Description
Typed dependency parser, trained on the CoNLL dataset.
Dependency parsing is the task of extracting a dependency parse of a sentence that represents its grammatical structure and defines the relationships between "head" words and the words that modify those heads.
How to use
from sparknlp.pretrained import PretrainedPipeline
pipeline = PretrainedPipeline('dependency_parse', lang = 'en')
annotations = pipeline.fullAnnotate("Dependencies represents relationships betweens words in a Sentence")[0]
annotations.keys()
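Each value in the returned dictionary is a list of Annotation objects, so the full output can be inspected directly. A minimal sketch follows; the output key names depend on the pipeline's stages, so they are read from the result rather than assumed:
# Print every output column with its raw results (tokens, heads, labels, ...).
for output_column, annotation_list in annotations.items():
    print(output_column, [annotation.result for annotation in annotation_list])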
import com.johnsnowlabs.nlp.pretrained.PretrainedPipeline
val pipeline = new PretrainedPipeline("dependency_parse", lang = "en")
val result = pipeline.fullAnnotate("Dependencies represents relationships betweens words in a Sentence")(0)
import nlu
nlu.load("dep.typed").predict("Dependencies represents relationships betweens words in a Sentence")
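The predict call returns a pandas DataFrame. A minimal sketch of inspecting it; the column names vary across NLU versions, so they are listed rather than assumed:
dep_df = nlu.load("dep.typed").predict("Dependencies represents relationships betweens words in a Sentence")
# List the available columns before selecting dependency heads or labels.
print(dep_df.columns.tolist())
print(dep_df.head())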
Results
+---------------------------------------------------------------------------------+--------------------------------------------------------+
|result (dependency heads)                                                        |result (typed dependency labels)                        |
+---------------------------------------------------------------------------------+--------------------------------------------------------+
|[ROOT, Dependencies, represents, words, relationships, Sentence, Sentence, words]|[root, parataxis, nsubj, amod, nsubj, case, nsubj, flat]|
+---------------------------------------------------------------------------------+--------------------------------------------------------+
Model Information
| Model Name: | dependency_parse |
| Type: | pipeline |
| Compatibility: | Spark NLP 4.0.0+ |
| License: | Open Source |
| Edition: | Official |
| Language: | en |
| Size: | 24.1 MB |
Included Models
- DocumentAssembler
- SentenceDetector
- TokenizerModel
- PerceptronModel
- DependencyParserModel
- TypedDependencyParserModel
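For reference, a minimal sketch of assembling a comparable pipeline from these stages with the open-source annotators. The pretrained model names used here ("pos_anc", "dependency_conllu", "dependency_typed_conllu") are assumptions, and a plain Tokenizer stands in for the fitted TokenizerModel shipped with the pipeline; check the Models Hub for the names that match your Spark NLP version:
from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import (
    SentenceDetector,
    Tokenizer,
    PerceptronModel,
    DependencyParserModel,
    TypedDependencyParserModel,
)

# Raw text -> document -> sentences -> tokens.
document_assembler = DocumentAssembler().setInputCol("text").setOutputCol("document")
sentence_detector = SentenceDetector().setInputCols(["document"]).setOutputCol("sentence")
tokenizer = Tokenizer().setInputCols(["sentence"]).setOutputCol("token")

# Part-of-speech tags feed both dependency parsers ("pos_anc" is an assumed model name).
pos_tagger = PerceptronModel.pretrained("pos_anc", "en") \
    .setInputCols(["sentence", "token"]) \
    .setOutputCol("pos")

# Unlabeled dependency parse: a head word for each token ("dependency_conllu" is assumed).
dependency_parser = DependencyParserModel.pretrained("dependency_conllu", "en") \
    .setInputCols(["sentence", "pos", "token"]) \
    .setOutputCol("dependency")

# Typed (labeled) dependencies on top of the unlabeled parse ("dependency_typed_conllu" is assumed).
typed_dependency_parser = TypedDependencyParserModel.pretrained("dependency_typed_conllu", "en") \
    .setInputCols(["token", "pos", "dependency"]) \
    .setOutputCol("dependency_type")

manual_pipeline = Pipeline(stages=[
    document_assembler,
    sentence_detector,
    tokenizer,
    pos_tagger,
    dependency_parser,
    typed_dependency_parser,
])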