sparknlp.annotator.ner.ner_dl_graph_checker#

Contains classes for NerDL.

Module Contents#

Classes#

NerDLGraphChecker

Checks whether a suitable NerDL graph is available for the given training dataset, before any computations or training is done.

NerDLGraphCheckerModel

Resulting model from NerDLGraphChecker, which does not perform any transformations, as the checks are done during the fit phase.

class NerDLGraphChecker[source]#

Checks whether a suitable NerDL graph is available for the given training dataset, before any computations/training is done. This annotator is useful for custom training cases, where specialized graphs are needed.

Important: This annotator should be positioned before any embedding or NerDLApproach annotators in the pipeline. It will process the whole dataset to extract the required graph parameters.

This annotator requires a dataset with at least two columns: one with the tokens and one with the labels. In addition, it requires the embedding annotator used in the pipeline, so that the suitable embedding dimension can be extracted.

For extended examples of usage, see the `Examples <JohnSnowLabs/spark-nlp>`__ and the NerDLGraphCheckerTestSpec.

Input Annotation types: DOCUMENT, TOKEN

Output Annotation type: NONE

Parameters:
inputCols

Column names of input annotations

labelColumn

Column name for data labels

embeddingsDim

Dimensionality of embeddings

Examples

>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from sparknlp.training import CoNLL
>>> from pyspark.ml import Pipeline

This CoNLL dataset already includes a sentence, token and label column with their respective annotator types, so it can be read directly. If a custom dataset is used, these columns need to be defined first (see the sketch after the next code block).

>>> conll = CoNLL()
>>> trainingData = conll.readDataset(spark, "src/test/resources/conll2003/eng.train")
>>> embeddings = BertEmbeddings.pretrained() \
...     .setInputCols(["sentence", "token"]) \
...     .setOutputCol("embeddings")

This annotator requires the data used to determine the NerDLApproach graph: text, tokens, labels and the embedding model.

>>> nerDLGraphChecker = NerDLGraphChecker() \
...     .setInputCols(["sentence", "token"]) \
...     .setLabelColumn("label") \
...     .setEmbeddingsModel(embeddings)
>>> nerTagger = NerDLApproach() \
...     .setInputCols(["sentence", "token", "embeddings"]) \
...     .setLabelColumn("label") \
...     .setOutputCol("ner") \
...     .setMaxEpochs(1) \
...     .setRandomSeed(0) \
...     .setVerbose(0)
>>> pipeline = Pipeline().setStages([nerDLGraphChecker, embeddings, nerTagger])

If we now fit the pipeline and no suitable graph is available, an exception is raised.

>>> pipelineModel = pipeline.fit(trainingData)
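
Because the check runs during fit, before any embeddings are computed, the failure surfaces early and can be handled explicitly. A minimal sketch, using a broad except clause since the exact exception type is not specified here:

>>> try:
...     pipelineModel = pipeline.fit(trainingData)
... except Exception as e:
...     print("No suitable NerDL graph available:", e)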
inputCols[source]#
labelColumn[source]#
embeddingsDim[source]#
inputAnnotatorTypes[source]#
graphFolder[source]#
setInputCols(*value)[source]#

Sets column names of input annotations.

Parameters:
*value : List[str]

Input columns for the annotator

setLabelColumn(value)[source]#

Sets name of column for data labels.

Parameters:
value : str

Column for data labels

setEmbeddingsDim(value: int)[source]#

Sets the dimensionality of the embeddings.

Parameters:
value : int

Dimensionality of embeddings
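
If the embeddings annotator is not available when the checker is configured, the dimension can be set manually instead. A minimal sketch, assuming BERT base embeddings with a dimension of 768:

>>> nerDLGraphChecker = NerDLGraphChecker() \
...     .setInputCols(["sentence", "token"]) \
...     .setLabelColumn("label") \
...     .setEmbeddingsDim(768)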

setEmbeddingsModel(model: sparknlp.common.HasEmbeddingsProperties)[source]#

Sets embeddingsDim from a given embeddings model, if possible. Falls back to setEmbeddingsDim if the dimension cannot be obtained automatically.

Parameters:
model : sparknlp.common.HasEmbeddingsProperties

Embeddings annotator to extract the dimension from

setGraphFolder(p)[source]#

Sets the folder path that contains external graph files.

Parameters:
p : str

Folder path that contains external graph files
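
A minimal usage sketch, assuming the custom graphs live in a local folder (the path below is hypothetical):

>>> nerDLGraphChecker = NerDLGraphChecker() \
...     .setInputCols(["sentence", "token"]) \
...     .setLabelColumn("label") \
...     .setEmbeddingsModel(embeddings) \
...     .setGraphFolder("path/to/custom/graphs")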

class NerDLGraphCheckerModel(classname='com.johnsnowlabs.nlp.annotators.ner.dl.NerDLGraphCheckerModel', java_model=None)[source]#

Resulting model from NerDLGraphChecker, which does not perform any transformations, as the checks are done during the fit phase. It acts as the identity.

This annotator should never be used directly.

inputAnnotatorTypes[source]#