
class NerDLApproach extends AnnotatorApproach[NerDLModel] with NerApproach[NerDLApproach] with Logging with ParamsAndFeaturesWritable with EvaluationDLParams

This Named Entity Recognition annotator allows the training of a generic NER model based on neural networks.

The architecture of the neural network is a Char CNNs - BiLSTM - CRF that achieves state-of-the-art results on most datasets.

For instantiated/pretrained models, see NerDLModel.

The training data should be a labeled Spark Dataset in the format of CoNLL 2003 IOB, with Annotation type columns. The data should have columns of type DOCUMENT, TOKEN, WORD_EMBEDDINGS and an additional label column of annotator type NAMED_ENTITY. Excluding the label, these columns can be produced by, for example, a SentenceDetector, a Tokenizer and a WordEmbeddingsModel.
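For reference, CoNLL 2003 files store one token per line with four space-separated columns (token, POS tag, syntactic chunk tag, NER tag in IOB format); sentences are separated by blank lines. An excerpt in the style of the eng.train file:

```
EU NNP B-NP B-ORG
rejects VBZ B-VP O
German JJ B-NP B-MISC
call NN I-NP O
to TO B-VP O
boycott VB I-VP O
British JJ B-NP B-MISC
lamb NN I-NP O
. . O O
```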

Setting a test dataset to monitor model metrics can be done with .setTestDataset. The method expects a path to a parquet file containing a dataframe that has the same required columns as the training dataframe. The pre-processing steps applied to the training dataframe should also be applied to the test dataframe. The following example shows how to create such a test dataset from a CoNLL dataset:

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val embeddings = WordEmbeddingsModel
  .pretrained()
  .setInputCols("document", "token")
  .setOutputCol("embeddings")

val preProcessingPipeline = new Pipeline().setStages(Array(documentAssembler, embeddings))

val conll = CoNLL()
val Array(train, test) = conll
  .readDataset(spark, "src/test/resources/conll2003/eng.train")
  .randomSplit(Array(0.8, 0.2))

preProcessingPipeline
  .fit(test)
  .transform(test)
  .write
  .mode("overwrite")
  .parquet("test_data")

val nerTagger = new NerDLApproach()
  .setInputCols("document", "token", "embeddings")
  .setLabelColumn("label")
  .setOutputCol("ner")
  .setTestDataset("test_data")

For extended examples of usage, see the Examples and the NerDLSpec.

Example

import com.johnsnowlabs.nlp.base.DocumentAssembler
import com.johnsnowlabs.nlp.annotators.Tokenizer
import com.johnsnowlabs.nlp.annotators.sbd.pragmatic.SentenceDetector
import com.johnsnowlabs.nlp.embeddings.BertEmbeddings
import com.johnsnowlabs.nlp.annotators.ner.dl.NerDLApproach
import com.johnsnowlabs.nlp.training.CoNLL
import org.apache.spark.ml.Pipeline

// This CoNLL dataset already includes a sentence, token and label
// column with their respective annotator types. If a custom dataset is used,
// these need to be defined with for example:

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val sentence = new SentenceDetector()
  .setInputCols("document")
  .setOutputCol("sentence")

val tokenizer = new Tokenizer()
  .setInputCols("sentence")
  .setOutputCol("token")

// Then the training can start
val embeddings = BertEmbeddings.pretrained()
  .setInputCols("sentence", "token")
  .setOutputCol("embeddings")

val nerTagger = new NerDLApproach()
  .setInputCols("sentence", "token", "embeddings")
  .setLabelColumn("label")
  .setOutputCol("ner")
  .setMaxEpochs(1)
  .setRandomSeed(0)
  .setVerbose(0)

val pipeline = new Pipeline().setStages(Array(
  embeddings,
  nerTagger
))

// We use the sentences, tokens and labels from the CoNLL dataset
val conll = CoNLL()
val trainingData = conll.readDataset(spark, "src/test/resources/conll2003/eng.train")

val pipelineModel = pipeline.fit(trainingData)

See also

NerCrfApproach for a generic CRF approach

NerConverter to further process the results

Linear Supertypes
EvaluationDLParams, ParamsAndFeaturesWritable, HasFeatures, Logging, NerApproach[NerDLApproach], AnnotatorApproach[NerDLModel], CanBeLazy, DefaultParamsWritable, MLWritable, HasOutputAnnotatorType, HasOutputAnnotationCol, HasInputAnnotationCols, Estimator[NerDLModel], PipelineStage, Logging, Params, Serializable, Serializable, Identifiable, AnyRef, Any

Instance Constructors

  1. new NerDLApproach()
  2. new NerDLApproach(uid: String)

    uid

    required uid for storing annotator to disk

Type Members

  1. type AnnotatorType = String
    Definition Classes
    HasOutputAnnotatorType

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def $[T](param: Param[T]): T
    Attributes
    protected
    Definition Classes
    Params
  4. def $$[T](feature: StructFeature[T]): T
    Attributes
    protected
    Definition Classes
    HasFeatures
  5. def $$[K, V](feature: MapFeature[K, V]): Map[K, V]
    Attributes
    protected
    Definition Classes
    HasFeatures
  6. def $$[T](feature: SetFeature[T]): Set[T]
    Attributes
    protected
    Definition Classes
    HasFeatures
  7. def $$[T](feature: ArrayFeature[T]): Array[T]
    Attributes
    protected
    Definition Classes
    HasFeatures
  8. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  9. def _fit(dataset: Dataset[_], recursiveStages: Option[PipelineModel]): NerDLModel
    Attributes
    protected
    Definition Classes
    AnnotatorApproach
  10. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  11. val batchSize: IntParam

    Batch size (Default: 8)

  12. def beforeTraining(spark: SparkSession): Unit
    Definition Classes
    NerDLApproach → AnnotatorApproach
  13. val bestModelMetric: Param[String]

    Whether to check F1 Micro-average or F1 Macro-average as the final metric for the best model. This will fall back to loss if there is no validation or test dataset

  14. def calculateEmbeddingsDim(sentences: Seq[WordpieceEmbeddingsSentence]): Int
  15. final def checkSchema(schema: StructType, inputAnnotatorType: String): Boolean
    Attributes
    protected
    Definition Classes
    HasInputAnnotationCols
  16. final def clear(param: Param[_]): NerDLApproach.this.type
    Definition Classes
    Params
  17. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  18. val configProtoBytes: IntArrayParam

    ConfigProto from tensorflow, serialized into byte array.

    ConfigProto from tensorflow, serialized into byte array. Get with config_proto.SerializeToString()

  19. final def copy(extra: ParamMap): Estimator[NerDLModel]
    Definition Classes
    AnnotatorApproach → Estimator → PipelineStage → Params
  20. def copyValues[T <: Params](to: T, extra: ParamMap): T
    Attributes
    protected
    Definition Classes
    Params
  21. final def defaultCopy[T <: Params](extra: ParamMap): T
    Attributes
    protected
    Definition Classes
    Params
  22. val description: String

    Trains Tensorflow based Char-CNN-BLSTM model

    Trains Tensorflow based Char-CNN-BLSTM model

    Definition Classes
    NerDLApproach → AnnotatorApproach
  23. val dropout: FloatParam

    Dropout coefficient (Default: 0.5f)

  24. val enableMemoryOptimizer: BooleanParam

    Whether to optimize for large datasets or not (Default: false).

    Whether to optimize for large datasets or not (Default: false). Enabling this option can slow down training.

  25. val enableOutputLogs: BooleanParam

    Whether to output to annotators log folder (Default: false)

    Whether to output to annotators log folder (Default: false)

    Definition Classes
    EvaluationDLParams
  26. val entities: StringArrayParam

    Entities to recognize

    Entities to recognize

    Definition Classes
    NerApproach
  27. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  28. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  29. val evaluationLogExtended: BooleanParam

    Whether validation logging should be extended (Default: false): it displays the time and the evaluation of each label

    Whether validation logging should be extended (Default: false): it displays the time and the evaluation of each label

    Definition Classes
    EvaluationDLParams
  30. def explainParam(param: Param[_]): String
    Definition Classes
    Params
  31. def explainParams(): String
    Definition Classes
    Params
  32. final def extractParamMap(): ParamMap
    Definition Classes
    Params
  33. final def extractParamMap(extra: ParamMap): ParamMap
    Definition Classes
    Params
  34. val features: ArrayBuffer[Feature[_, _, _]]
    Definition Classes
    HasFeatures
  35. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  36. final def fit(dataset: Dataset[_]): NerDLModel
    Definition Classes
    AnnotatorApproach → Estimator
  37. def fit(dataset: Dataset[_], paramMaps: Seq[ParamMap]): Seq[NerDLModel]
    Definition Classes
    Estimator
    Annotations
    @Since( "2.0.0" )
  38. def fit(dataset: Dataset[_], paramMap: ParamMap): NerDLModel
    Definition Classes
    Estimator
    Annotations
    @Since( "2.0.0" )
  39. def fit(dataset: Dataset[_], firstParamPair: ParamPair[_], otherParamPairs: ParamPair[_]*): NerDLModel
    Definition Classes
    Estimator
    Annotations
    @Since( "2.0.0" ) @varargs()
  40. def get[T](feature: StructFeature[T]): Option[T]
    Attributes
    protected
    Definition Classes
    HasFeatures
  41. def get[K, V](feature: MapFeature[K, V]): Option[Map[K, V]]
    Attributes
    protected
    Definition Classes
    HasFeatures
  42. def get[T](feature: SetFeature[T]): Option[Set[T]]
    Attributes
    protected
    Definition Classes
    HasFeatures
  43. def get[T](feature: ArrayFeature[T]): Option[Array[T]]
    Attributes
    protected
    Definition Classes
    HasFeatures
  44. final def get[T](param: Param[T]): Option[T]
    Definition Classes
    Params
  45. def getBatchSize: Int

    Batch size

  46. def getBestModelMetric: String

  47. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  48. def getConfigProtoBytes: Option[Array[Byte]]

    ConfigProto from tensorflow, serialized into byte array.

    ConfigProto from tensorflow, serialized into byte array. Get with config_proto.SerializeToString()

  49. final def getDefault[T](param: Param[T]): Option[T]
    Definition Classes
    Params
  50. def getDropout: Float

    Dropout coefficient

  51. def getEnableMemoryOptimizer: Boolean

    Memory Optimizer

  52. def getEnableOutputLogs: Boolean

    Whether to output to annotators log folder (Default: false)

    Whether to output to annotators log folder (Default: false)

    Definition Classes
    EvaluationDLParams
  53. def getInputCols: Array[String]

    returns

    input annotations columns currently used

    Definition Classes
    HasInputAnnotationCols
  54. def getLazyAnnotator: Boolean
    Definition Classes
    CanBeLazy
  55. def getLogName: String
    Definition Classes
    NerDLApproach → Logging
  56. def getLr: Float

    Learning Rate

  57. def getMaxEpochs: Int

    Maximum number of epochs to train

    Maximum number of epochs to train

    Definition Classes
    NerApproach
  58. def getMinEpochs: Int

    Minimum number of epochs to train

    Minimum number of epochs to train

    Definition Classes
    NerApproach
  59. final def getOrDefault[T](param: Param[T]): T
    Definition Classes
    Params
  60. final def getOutputCol: String

    Gets the annotation column name that will be generated

    Gets the annotation column name that will be generated

    Definition Classes
    HasOutputAnnotationCol
  61. def getOutputLogsPath: String

    Folder path to save training logs (Default: "")

    Folder path to save training logs (Default: "")

    Definition Classes
    EvaluationDLParams
  62. def getParam(paramName: String): Param[Any]
    Definition Classes
    Params
  63. def getPo: Float

    Learning rate decay coefficient.

    Learning rate decay coefficient. Real Learning Rate = lr / (1 + po * epoch)

  64. def getRandomSeed: Int

    Random seed

    Random seed

    Definition Classes
    NerApproach
  65. def getUseBestModel: Boolean

    useBestModel

  66. def getUseContrib: Boolean

    Whether to use contrib LSTM Cells.

    Whether to use contrib LSTM Cells. Not compatible with Windows. Might slightly improve accuracy.

  67. def getValidationSplit: Float

    Choose the proportion of training dataset to be validated against the model on each Epoch (Default: 0.0f).

    Choose the proportion of training dataset to be validated against the model on each Epoch (Default: 0.0f). The value should be between 0.0 and 1.0; the default of 0.0 disables validation.

    Definition Classes
    EvaluationDLParams
  68. val graphFolder: Param[String]

    Folder path that contain external graph files

  69. final def hasDefault[T](param: Param[T]): Boolean
    Definition Classes
    Params
  70. def hasParam(paramName: String): Boolean
    Definition Classes
    Params
  71. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  72. val includeAllConfidenceScores: BooleanParam

    Whether to include all confidence scores in annotation metadata or just the score of the predicted tag

  73. val includeConfidence: BooleanParam

    Whether to include confidence scores in annotation metadata (Default: false)

  74. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  75. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  76. val inputAnnotatorTypes: Array[String]

    Input annotator types: DOCUMENT, TOKEN, WORD_EMBEDDINGS

    Input annotator types: DOCUMENT, TOKEN, WORD_EMBEDDINGS

    Definition Classes
    NerDLApproach → HasInputAnnotationCols
  77. final val inputCols: StringArrayParam

    Columns that contain annotations necessary to run this annotator. AnnotatorType is used as both input and output column if not specified

    Columns that contain annotations necessary to run this annotator. AnnotatorType is used as both input and output column if not specified

    Attributes
    protected
    Definition Classes
    HasInputAnnotationCols
  78. final def isDefined(param: Param[_]): Boolean
    Definition Classes
    Params
  79. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  80. final def isSet(param: Param[_]): Boolean
    Definition Classes
    Params
  81. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  82. val labelColumn: Param[String]

    Column with the label for each token

    Column with the label for each token

    Definition Classes
    NerApproach
  83. val lazyAnnotator: BooleanParam
    Definition Classes
    CanBeLazy
  84. def log(value: ⇒ String, minLevel: Level): Unit
    Attributes
    protected
    Definition Classes
    Logging
  85. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  86. def logDebug(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  87. def logDebug(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  88. def logError(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  89. def logError(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  90. def logInfo(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  91. def logInfo(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  92. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  93. def logTrace(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  94. def logTrace(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  95. def logWarning(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  96. def logWarning(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  97. val logger: Logger
    Attributes
    protected
    Definition Classes
    Logging
  98. val lr: FloatParam

    Learning Rate (Default: 1e-3f)

  99. val maxEpochs: IntParam

    Maximum number of epochs to train

    Maximum number of epochs to train

    Definition Classes
    NerApproach
  100. val minEpochs: IntParam

    Minimum number of epochs to train

    Minimum number of epochs to train

    Definition Classes
    NerApproach
  101. def msgHelper(schema: StructType): String
    Attributes
    protected
    Definition Classes
    HasInputAnnotationCols
  102. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  103. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  104. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  105. def onTrained(model: NerDLModel, spark: SparkSession): Unit
    Definition Classes
    AnnotatorApproach
  106. def onWrite(path: String, spark: SparkSession): Unit
    Attributes
    protected
    Definition Classes
    ParamsAndFeaturesWritable
  107. val optionalInputAnnotatorTypes: Array[String]
    Definition Classes
    HasInputAnnotationCols
  108. val outputAnnotatorType: String

    Output annotator types: NAMED_ENTITY

    Output annotator types: NAMED_ENTITY

    Definition Classes
    NerDLApproach → HasOutputAnnotatorType
  109. final val outputCol: Param[String]
    Attributes
    protected
    Definition Classes
    HasOutputAnnotationCol
  110. def outputLog(value: ⇒ String, uuid: String, shouldLog: Boolean, outputLogsPath: String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  111. val outputLogsPath: Param[String]

    Folder path to save training logs (Default: "")

    Folder path to save training logs (Default: "")

    Definition Classes
    EvaluationDLParams
  112. lazy val params: Array[Param[_]]
    Definition Classes
    Params
  113. val po: FloatParam

    Learning rate decay coefficient (Default: 0.005f).

    Learning rate decay coefficient (Default: 0.005f). Real Learning Rate = lr / (1 + po * epoch)
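    The decay formula can be checked with a few lines of plain Python (a sketch using the documented defaults lr = 1e-3 and po = 0.005; effective_lr is an illustrative helper, not part of the API):

```python
def effective_lr(lr: float, po: float, epoch: int) -> float:
    # Effective learning rate per the documented formula: lr / (1 + po * epoch)
    return lr / (1 + po * epoch)

# With the defaults, the rate starts at lr and decays monotonically per epoch.
rates = [effective_lr(1e-3, 0.005, epoch) for epoch in range(5)]
```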

  114. val randomSeed: IntParam

    Random seed

    Random seed

    Definition Classes
    NerApproach
  115. def save(path: String): Unit
    Definition Classes
    MLWritable
    Annotations
    @Since( "1.6.0" ) @throws( ... )
  116. def set[T](feature: StructFeature[T], value: T): NerDLApproach.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  117. def set[K, V](feature: MapFeature[K, V], value: Map[K, V]): NerDLApproach.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  118. def set[T](feature: SetFeature[T], value: Set[T]): NerDLApproach.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  119. def set[T](feature: ArrayFeature[T], value: Array[T]): NerDLApproach.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  120. final def set(paramPair: ParamPair[_]): NerDLApproach.this.type
    Attributes
    protected
    Definition Classes
    Params
  121. final def set(param: String, value: Any): NerDLApproach.this.type
    Attributes
    protected
    Definition Classes
    Params
  122. final def set[T](param: Param[T], value: T): NerDLApproach.this.type
    Definition Classes
    Params
  123. def setBatchSize(batch: Int): NerDLApproach.this.type

    Batch size

  124. def setBestModelMetric(value: String): NerDLApproach.this.type

  125. def setConfigProtoBytes(bytes: Array[Int]): NerDLApproach.this.type

    ConfigProto from tensorflow, serialized into byte array.

    ConfigProto from tensorflow, serialized into byte array. Get with config_proto.SerializeToString()

  126. def setDefault[T](feature: StructFeature[T], value: () ⇒ T): NerDLApproach.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  127. def setDefault[K, V](feature: MapFeature[K, V], value: () ⇒ Map[K, V]): NerDLApproach.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  128. def setDefault[T](feature: SetFeature[T], value: () ⇒ Set[T]): NerDLApproach.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  129. def setDefault[T](feature: ArrayFeature[T], value: () ⇒ Array[T]): NerDLApproach.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  130. final def setDefault(paramPairs: ParamPair[_]*): NerDLApproach.this.type
    Attributes
    protected
    Definition Classes
    Params
  131. final def setDefault[T](param: Param[T], value: T): NerDLApproach.this.type
    Attributes
    protected[org.apache.spark.ml]
    Definition Classes
    Params
  132. def setDropout(dropout: Float): NerDLApproach.this.type

    Dropout coefficient

  133. def setEnableMemoryOptimizer(value: Boolean): NerDLApproach.this.type

    Whether to optimize for large datasets or not.

    Whether to optimize for large datasets or not. Enabling this option can slow down training.

  134. def setEnableOutputLogs(enableOutputLogs: Boolean): NerDLApproach.this.type

    Whether to output to annotators log folder (Default: false)

    Whether to output to annotators log folder (Default: false)

    Definition Classes
    EvaluationDLParams
  135. def setEntities(tags: Array[String]): NerDLApproach

    Entities to recognize

    Entities to recognize

    Definition Classes
    NerApproach
  136. def setEvaluationLogExtended(evaluationLogExtended: Boolean): NerDLApproach.this.type

    Whether validation logging should be extended: it displays the time and the evaluation of each label.

    Whether validation logging should be extended: it displays the time and the evaluation of each label. Default is false.

    Definition Classes
    EvaluationDLParams
  137. def setGraphFolder(path: String): NerDLApproach.this.type

    Folder path that contain external graph files

  138. def setIncludeAllConfidenceScores(value: Boolean): NerDLApproach.this.type

    Whether to include confidence scores for all tags rather than just for the predicted one

  139. def setIncludeConfidence(value: Boolean): NerDLApproach.this.type

    Whether to include confidence scores in annotation metadata

  140. final def setInputCols(value: String*): NerDLApproach.this.type
    Definition Classes
    HasInputAnnotationCols
  141. def setInputCols(value: Array[String]): NerDLApproach.this.type

    Overrides the required annotator columns if different from the default

    Overrides the required annotator columns if different from the default

    Definition Classes
    HasInputAnnotationCols
  142. def setLabelColumn(column: String): NerDLApproach

    Column with the label for each token

    Column with the label for each token

    Definition Classes
    NerApproach
  143. def setLazyAnnotator(value: Boolean): NerDLApproach.this.type
    Definition Classes
    CanBeLazy
  144. def setLr(lr: Float): NerDLApproach.this.type

    Learning Rate

  145. def setMaxEpochs(epochs: Int): NerDLApproach

    Maximum number of epochs to train

    Maximum number of epochs to train

    Definition Classes
    NerApproach
  146. def setMinEpochs(epochs: Int): NerDLApproach

    Minimum number of epochs to train

    Minimum number of epochs to train

    Definition Classes
    NerApproach
  147. final def setOutputCol(value: String): NerDLApproach.this.type

    Overrides annotation column name when transforming

    Overrides annotation column name when transforming

    Definition Classes
    HasOutputAnnotationCol
  148. def setOutputLogsPath(path: String): NerDLApproach.this.type

    Folder path to save training logs (Default: "")

    Folder path to save training logs (Default: "")

    Definition Classes
    EvaluationDLParams
  149. def setPo(po: Float): NerDLApproach.this.type

    Learning rate decay coefficient.

    Learning rate decay coefficient. Real Learning Rate = lr / (1 + po * epoch)

  150. def setRandomSeed(seed: Int): NerDLApproach

    Random seed

    Random seed

    Definition Classes
    NerApproach
  151. def setTestDataset(er: ExternalResource): NerDLApproach.this.type

    ExternalResource to a parquet file of a test dataset.

    ExternalResource to a parquet file of a test dataset. If set, statistics are calculated on it during training.

    When using an ExternalResource, only parquet files are accepted for this function.

    The parquet file must be a dataframe that has the same columns as the model that is being trained. For example, if the model needs as input DOCUMENT, TOKEN, WORD_EMBEDDINGS (Features) and NAMED_ENTITY (label) then these columns also need to be present while saving the dataframe. The pre-processing steps for the training dataframe should also be applied to the test dataframe.

    An example on how to create such a parquet file could be:

    // assuming preProcessingPipeline
    val Array(train, test) = data.randomSplit(Array(0.8, 0.2))
    
    preProcessingPipeline
      .fit(test)
      .transform(test)
      .write
      .mode("overwrite")
      .parquet("test_data")
    
    annotator.setTestDataset("test_data")
    Definition Classes
    EvaluationDLParams
  152. def setTestDataset(path: String, readAs: Format = ReadAs.SPARK, options: Map[String, String] = Map("format" -> "parquet")): NerDLApproach.this.type

    Path to a parquet file of a test dataset.

    Path to a parquet file of a test dataset. If set, statistics are calculated on it during training.

    The parquet file must be a dataframe that has the same columns as the model that is being trained. For example, if the model needs as input DOCUMENT, TOKEN, WORD_EMBEDDINGS (Features) and NAMED_ENTITY (label) then these columns also need to be present while saving the dataframe. The pre-processing steps for the training dataframe should also be applied to the test dataframe.

    An example on how to create such a parquet file could be:

    // assuming preProcessingPipeline
    val Array(train, test) = data.randomSplit(Array(0.8, 0.2))
    
    preProcessingPipeline
      .fit(test)
      .transform(test)
      .write
      .mode("overwrite")
      .parquet("test_data")
    
    annotator.setTestDataset("test_data")
    Definition Classes
    EvaluationDLParams
  153. def setUseBestModel(value: Boolean): NerDLApproach.this.type

  154. def setUseContrib(value: Boolean): NerDLApproach.this.type

    Whether to use contrib LSTM Cells.

    Whether to use contrib LSTM Cells. Not compatible with Windows. Might slightly improve accuracy.

  155. def setValidationSplit(validationSplit: Float): NerDLApproach.this.type

    Choose the proportion of training dataset to be validated against the model on each Epoch (Default: 0.0f).

    Choose the proportion of training dataset to be validated against the model on each Epoch (Default: 0.0f). The value should be between 0.0 and 1.0; the default of 0.0 disables validation.

    Definition Classes
    EvaluationDLParams
  156. def setVerbose(verbose: Level): NerDLApproach.this.type

    Level of verbosity during training (Default: Verbose.Silent.id)

    Level of verbosity during training (Default: Verbose.Silent.id)

    Definition Classes
    EvaluationDLParams
  157. def setVerbose(verbose: Int): NerDLApproach.this.type

    Level of verbosity during training (Default: Verbose.Silent.id)

    Level of verbosity during training (Default: Verbose.Silent.id)

    Definition Classes
    EvaluationDLParams
  158. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  159. val testDataset: ExternalResourceParam

    Path to a parquet file of a test dataset.

    Path to a parquet file of a test dataset. If set, statistics are calculated on it during training.

    Definition Classes
    EvaluationDLParams
  160. def toString(): String
    Definition Classes
    Identifiable → AnyRef → Any
  161. def train(dataset: Dataset[_], recursivePipeline: Option[PipelineModel]): NerDLModel
    Definition Classes
    NerDLApproach → AnnotatorApproach
  162. final def transformSchema(schema: StructType): StructType

    Requirement for pipeline transformation validation.

    Requirement for pipeline transformation validation. It is called on fit().

    Definition Classes
    AnnotatorApproach → PipelineStage
  163. def transformSchema(schema: StructType, logging: Boolean): StructType
    Attributes
    protected
    Definition Classes
    PipelineStage
    Annotations
    @DeveloperApi()
  164. val uid: String
    Definition Classes
    NerDLApproach → Identifiable
  165. val useBestModel: BooleanParam

    Whether to restore and use the model that has achieved the best performance at the end of the training.

    Whether to restore and use the model that has achieved the best performance at the end of training. The monitored metric is F1 on the testDataset; if that is not set, F1 on the validationSplit is used, and if neither is set, the training loss is monitored.
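    The fallback order for the monitored metric can be sketched as follows (illustrative Python, not the actual implementation; the function name and flags are hypothetical):

```python
def monitored_metric(has_test_dataset: bool, validation_split: float) -> str:
    # Best-model selection follows the documented fallback order:
    # F1 on the test dataset, then F1 on the validation split, then loss.
    if has_test_dataset:
        return "f1 (test dataset)"
    if validation_split > 0.0:
        return "f1 (validation split)"
    return "loss"
```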

  166. val useContrib: BooleanParam

    Whether to use contrib LSTM Cells (Default: true).

    Whether to use contrib LSTM Cells (Default: true). Not compatible with Windows. Might slightly improve accuracy. This param is deprecated and only exists for backward compatibility.

  167. def validate(schema: StructType): Boolean

    Takes a Dataset and checks to see if all the required annotation types are present.

    Takes a Dataset and checks to see if all the required annotation types are present.

    schema

    to be validated

    returns

    True if all the required types are present, else false

    Attributes
    protected
    Definition Classes
    AnnotatorApproach
  168. val validationSplit: FloatParam

    Choose the proportion of training dataset to be validated against the model on each Epoch (Default: 0.0f).

    Choose the proportion of training dataset to be validated against the model on each Epoch (Default: 0.0f). The value should be between 0.0 and 1.0; the default of 0.0 disables validation.

    Definition Classes
    EvaluationDLParams
  169. val verbose: IntParam

    Level of verbosity during training (Default: Verbose.Silent.id)

    Level of verbosity during training (Default: Verbose.Silent.id)

    Definition Classes
    EvaluationDLParams
  170. val verboseLevel: Level
    Definition Classes
    NerDLApproach → Logging
  171. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  172. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  173. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  174. def write: MLWriter
    Definition Classes
    ParamsAndFeaturesWritable → DefaultParamsWritable → MLWritable

Inherited from EvaluationDLParams

Inherited from ParamsAndFeaturesWritable

Inherited from HasFeatures

Inherited from Logging

Inherited from NerApproach[NerDLApproach]

Inherited from AnnotatorApproach[NerDLModel]

Inherited from CanBeLazy

Inherited from DefaultParamsWritable

Inherited from MLWritable

Inherited from HasOutputAnnotatorType

Inherited from HasOutputAnnotationCol

Inherited from HasInputAnnotationCols

Inherited from Estimator[NerDLModel]

Inherited from PipelineStage

Inherited from Logging

Inherited from Params

Inherited from Serializable

Inherited from Serializable

Inherited from Identifiable

Inherited from AnyRef

Inherited from Any

Parameters

A list of (hyper-)parameter keys this annotator can take. Users can set and get the parameter values through setters and getters, respectively.

Annotator types

Required input and expected output annotator types
