com.johnsnowlabs.nlp.annotators.spell.context

ContextSpellCheckerApproach

class ContextSpellCheckerApproach extends AnnotatorApproach[ContextSpellCheckerModel] with HasFeatures with WeightedLevenshtein

Trains a deep-learning-based Noisy Channel Model spell checking algorithm. Correction candidates are extracted by combining context information and word information.

For instantiated/pretrained models, see ContextSpellCheckerModel.

Spell checking is a sequence-to-sequence mapping problem. Given an input sequence, potentially containing a certain number of errors, ContextSpellChecker will rank correction sequences according to three things:

  1. Different correction candidates for each word (word level).
  2. The surrounding text of each word, i.e. its context (sentence level).
  3. The relative cost of different correction candidates, according to the edit operations at the character level they require (subword level).

For an in-depth explanation of the module see the article Applying Context Aware Spell Checking in Spark NLP.

For extended examples of usage, see the article Training a Contextual Spell Checker for Italian Language, the Examples and the ContextSpellCheckerTestSpec.

Example

For this example, we use the first Sherlock Holmes book as the training dataset.

import spark.implicits._
import com.johnsnowlabs.nlp.base.DocumentAssembler
import com.johnsnowlabs.nlp.annotators.Tokenizer
import com.johnsnowlabs.nlp.annotators.spell.context.ContextSpellCheckerApproach

import org.apache.spark.ml.Pipeline

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")


val tokenizer = new Tokenizer()
  .setInputCols("document")
  .setOutputCol("token")

val spellChecker = new ContextSpellCheckerApproach()
  .setInputCols("token")
  .setOutputCol("corrected")
  .setWordMaxDistance(3)
  .setBatchSize(24)
  .setEpochs(8)
  .setLanguageModelClasses(1650)  // dependent on vocabulary size
  // .addVocabClass("_NAME_", names) // Extra classes for correction could be added like this

val pipeline = new Pipeline().setStages(Array(
  documentAssembler,
  tokenizer,
  spellChecker
))

val path = "src/test/resources/spell/sherlockholmes.txt"
val dataset = spark.sparkContext.textFile(path)
  .toDF("text")
val pipelineModel = pipeline.fit(dataset)
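
Once fitted, the pipeline model can be applied to new text. A minimal usage sketch, assuming `spark.implicits._` and the column names from the pipeline above (the misspelled input is illustrative):

```scala
// Hypothetical input containing spelling errors; "corrected" is the output
// column configured on the spell checker above.
val testData = Seq("Plese alliow me tao introduce myslef").toDF("text")

pipelineModel.transform(testData)
  .selectExpr("corrected.result")
  .show(truncate = false)
```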

See also

NorvigSweetingApproach and SymmetricDeleteApproach for alternative approaches to spell checking

Linear Supertypes
WeightedLevenshtein, HasFeatures, AnnotatorApproach[ContextSpellCheckerModel], CanBeLazy, DefaultParamsWritable, MLWritable, HasOutputAnnotatorType, HasOutputAnnotationCol, HasInputAnnotationCols, Estimator[ContextSpellCheckerModel], PipelineStage, Logging, Params, Serializable, Serializable, Identifiable, AnyRef, Any

Instance Constructors

  1. new ContextSpellCheckerApproach()

    Annotator reference id. Used to identify elements in metadata or to refer to this annotator type

  2. new ContextSpellCheckerApproach(uid: String)

    uid

    required uid for storing annotator to disk

Type Members

  1. type AnnotatorType = String
    Definition Classes
    HasOutputAnnotatorType
  2. implicit class ArrayHelper extends AnyRef

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def $[T](param: Param[T]): T
    Attributes
    protected
    Definition Classes
    Params
  4. def $$[T](feature: StructFeature[T]): T
    Attributes
    protected
    Definition Classes
    HasFeatures
  5. def $$[K, V](feature: MapFeature[K, V]): Map[K, V]
    Attributes
    protected
    Definition Classes
    HasFeatures
  6. def $$[T](feature: SetFeature[T]): Set[T]
    Attributes
    protected
    Definition Classes
    HasFeatures
  7. def $$[T](feature: ArrayFeature[T]): Array[T]
    Attributes
    protected
    Definition Classes
    HasFeatures
  8. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  9. def _fit(dataset: Dataset[_], recursiveStages: Option[PipelineModel]): ContextSpellCheckerModel
    Attributes
    protected
    Definition Classes
    AnnotatorApproach
  10. def addRegexClass(usrLabel: String, usrRegex: String, userDist: Int = 3): ContextSpellCheckerApproach.this.type

    Adds a new class of words to correct, based on regex.

    usrLabel

    Name of the class

    usrRegex

    Regex to add

    userDist

    Maximal distance to the word

  11. def addVocabClass(usrLabel: String, vocabList: ArrayList[String], userDist: Int = 3): ContextSpellCheckerApproach.this.type

    Adds a new class of words to correct, based on a vocabulary.

    usrLabel

    Name of the class

    vocabList

    Vocabulary as a list

    userDist

    Maximal distance to the word
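
Both methods chain with the other setters. A hedged sketch of how they compose (the class labels and their contents below are hypothetical, not shipped with the library):

```scala
// Hypothetical extra correction classes. "_NAME_" corrects against a closed
// vocabulary; "_DATE_" corrects tokens matching a regex such as 10/25/1982.
val names = new java.util.ArrayList[String](
  java.util.Arrays.asList("John", "Jon", "Juan"))

val spellChecker = new ContextSpellCheckerApproach()
  .setInputCols("token")
  .setOutputCol("corrected")
  .addVocabClass("_NAME_", names)
  .addRegexClass("_DATE_", "\\d{2}/\\d{2}/\\d{4}")
```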

  12. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  13. def backTrack(dist: Array[Array[Float]], s2: String, s1: String, j: Int, i: Int, acc: Seq[(String, String)]): Seq[(String, String)]
    Definition Classes
    WeightedLevenshtein
  14. val batchSize: IntParam

    Batch size for the training in NLM (Default: 24).

  15. def beforeTraining(spark: SparkSession): Unit
    Definition Classes
    AnnotatorApproach
  16. val caseStrategy: IntParam

    What case combinations to try when generating candidates (Default: CandidateStrategy.ALL).

  17. final def checkSchema(schema: StructType, inputAnnotatorType: String): Boolean
    Attributes
    protected
    Definition Classes
    HasInputAnnotationCols
  18. val classCount: Param[Double]

    Minimum number of times a word needs to appear in the corpus to not be considered part of a special class (Default: 15.0).

  19. final def clear(param: Param[_]): ContextSpellCheckerApproach.this.type
    Definition Classes
    Params
  20. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  21. val compoundCount: Param[Int]

    Min number of times a compound word should appear to be included in vocab (Default: 5).

  22. def computeClasses(vocab: HashMap[String, Double], total: Double, k: Int): Map[String, (Int, Int)]
  23. val configProtoBytes: IntArrayParam

    ConfigProto from TensorFlow, serialized into a byte array. Get with config_proto.SerializeToString()

  24. final def copy(extra: ParamMap): Estimator[ContextSpellCheckerModel]
    Definition Classes
    AnnotatorApproach → Estimator → PipelineStage → Params
  25. def copyValues[T <: Params](to: T, extra: ParamMap): T
    Attributes
    protected
    Definition Classes
    Params
  26. final def defaultCopy[T <: Params](extra: ParamMap): T
    Attributes
    protected
    Definition Classes
    Params
  27. val description: String
  28. val epochs: IntParam

    Number of epochs to train the language model (Default: 2).

  29. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  30. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  31. val errorThreshold: FloatParam

    Threshold perplexity for a word to be considered as an error (Default: 10f).

  32. def explainParam(param: Param[_]): String
    Definition Classes
    Params
  33. def explainParams(): String
    Definition Classes
    Params
  34. final def extractParamMap(): ParamMap
    Definition Classes
    Params
  35. final def extractParamMap(extra: ParamMap): ParamMap
    Definition Classes
    Params
  36. val features: ArrayBuffer[Feature[_, _, _]]
    Definition Classes
    HasFeatures
  37. val finalRate: FloatParam

    Final learning rate for the LM (Default: 0.0005f).

  38. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  39. final def fit(dataset: Dataset[_]): ContextSpellCheckerModel
    Definition Classes
    AnnotatorApproach → Estimator
  40. def fit(dataset: Dataset[_], paramMaps: Seq[ParamMap]): Seq[ContextSpellCheckerModel]
    Definition Classes
    Estimator
    Annotations
    @Since( "2.0.0" )
  41. def fit(dataset: Dataset[_], paramMap: ParamMap): ContextSpellCheckerModel
    Definition Classes
    Estimator
    Annotations
    @Since( "2.0.0" )
  42. def fit(dataset: Dataset[_], firstParamPair: ParamPair[_], otherParamPairs: ParamPair[_]*): ContextSpellCheckerModel
    Definition Classes
    Estimator
    Annotations
    @Since( "2.0.0" ) @varargs()
  43. def genVocab(dataset: Dataset[_]): (HashMap[String, Double], Map[String, (Int, Int)])
  44. def get[T](feature: StructFeature[T]): Option[T]
    Attributes
    protected
    Definition Classes
    HasFeatures
  45. def get[K, V](feature: MapFeature[K, V]): Option[Map[K, V]]
    Attributes
    protected
    Definition Classes
    HasFeatures
  46. def get[T](feature: SetFeature[T]): Option[Set[T]]
    Attributes
    protected
    Definition Classes
    HasFeatures
  47. def get[T](feature: ArrayFeature[T]): Option[Array[T]]
    Attributes
    protected
    Definition Classes
    HasFeatures
  48. final def get[T](param: Param[T]): Option[T]
    Definition Classes
    Params
  49. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  50. def getConfigProtoBytes: Option[Array[Byte]]

  51. final def getDefault[T](param: Param[T]): Option[T]
    Definition Classes
    Params
  52. def getInputCols: Array[String]

    returns

    input annotations columns currently used

    Definition Classes
    HasInputAnnotationCols
  53. def getLazyAnnotator: Boolean
    Definition Classes
    CanBeLazy
  54. final def getOrDefault[T](param: Param[T]): T
    Definition Classes
    Params
  55. final def getOutputCol: String

    Gets the annotation column name that is going to be generated

    Definition Classes
    HasOutputAnnotationCol
  56. def getParam(paramName: String): Param[Any]
    Definition Classes
    Params
  57. val graphFolder: Param[String]

    Folder path that contains external graph files

  58. final def hasDefault[T](param: Param[T]): Boolean
    Definition Classes
    Params
  59. def hasParam(paramName: String): Boolean
    Definition Classes
    Params
  60. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  61. val initialRate: FloatParam

    Initial learning rate for the LM (Default: .7f).

  62. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  63. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  64. val inputAnnotatorTypes: Array[String]

    Input Annotator Types: TOKEN

    Definition Classes
    ContextSpellCheckerApproach → HasInputAnnotationCols
  65. final val inputCols: StringArrayParam

    Columns that contain annotations necessary to run this annotator. AnnotatorType is used both as input and output columns if not specified.

    Attributes
    protected
    Definition Classes
    HasInputAnnotationCols
  66. final def isDefined(param: Param[_]): Boolean
    Definition Classes
    Params
  67. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  68. final def isSet(param: Param[_]): Boolean
    Definition Classes
    Params
  69. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  70. val languageModelClasses: Param[Int]

    Number of classes to use during factorization of the softmax output in the LM (Default: 2000).

  71. val lazyAnnotator: BooleanParam
    Definition Classes
    CanBeLazy
  72. def learnDist(s1: String, s2: String): Seq[(String, String)]
    Definition Classes
    WeightedLevenshtein
  73. def levenshteinDist(s11: String, s22: String)(cost: (String, String) ⇒ Float): Float
    Definition Classes
    WeightedLevenshtein
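
This trait computes a Levenshtein distance in which edit operations can carry their own costs, as reflected by the curried cost function in levenshteinDist. A self-contained sketch of the underlying dynamic program (uniform unit insert/delete costs and a pluggable substitution cost; the library's actual weights come from a learned cost table):

```scala
// Levenshtein distance via dynamic programming with a pluggable substitution
// cost; a constant cost of 1.0f reproduces the classic edit distance.
def weightedLevenshtein(s1: String, s2: String)(cost: (Char, Char) => Float): Float = {
  val d = Array.ofDim[Float](s1.length + 1, s2.length + 1)
  for (i <- 0 to s1.length) d(i)(0) = i.toFloat
  for (j <- 0 to s2.length) d(0)(j) = j.toFloat
  for (i <- 1 to s1.length; j <- 1 to s2.length) {
    val sub = if (s1(i - 1) == s2(j - 1)) 0f else cost(s1(i - 1), s2(j - 1))
    d(i)(j) = math.min(math.min(d(i - 1)(j) + 1f, d(i)(j - 1) + 1f), d(i - 1)(j - 1) + sub)
  }
  d(s1.length)(s2.length)
}

val dist = weightedLevenshtein("kitten", "sitting")((_, _) => 1.0f)
// kitten -> sitting requires 3 unit-cost edits
```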
  74. def loadWeights(filename: String): Map[String, Map[String, Float]]
    Definition Classes
    WeightedLevenshtein
  75. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  76. def logDebug(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  77. def logDebug(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  78. def logError(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  79. def logError(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  80. def logInfo(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  81. def logInfo(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  82. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  83. def logTrace(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  84. def logTrace(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  85. def logWarning(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  86. def logWarning(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  87. val maxCandidates: IntParam

    Maximum number of candidates for every word (Default: 6).

  88. val maxSentLen: IntParam

    Maximum length for a sentence, used internally during training (Default: 250).

  89. val maxWindowLen: IntParam

    Maximum size for the window used to remember history prior to every correction (Default: 5).

  90. val minCount: Param[Double]

    Min number of times a token should appear to be included in vocab (Default: 3.0).

  91. def msgHelper(schema: StructType): String
    Attributes
    protected
    Definition Classes
    HasInputAnnotationCols
  92. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  93. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  94. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  95. def onTrained(model: ContextSpellCheckerModel, spark: SparkSession): Unit
    Definition Classes
    AnnotatorApproach
  96. val optionalInputAnnotatorTypes: Array[String]
    Definition Classes
    HasInputAnnotationCols
  97. val outputAnnotatorType: AnnotatorType

    Output Annotator Types: TOKEN

    Definition Classes
    ContextSpellCheckerApproach → HasOutputAnnotatorType
  98. final val outputCol: Param[String]
    Attributes
    protected
    Definition Classes
    HasOutputAnnotationCol
  99. lazy val params: Array[Param[_]]
    Definition Classes
    Params
  100. def save(path: String): Unit
    Definition Classes
    MLWritable
    Annotations
    @Since( "1.6.0" ) @throws( ... )
  101. def set[T](feature: StructFeature[T], value: T): ContextSpellCheckerApproach.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  102. def set[K, V](feature: MapFeature[K, V], value: Map[K, V]): ContextSpellCheckerApproach.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  103. def set[T](feature: SetFeature[T], value: Set[T]): ContextSpellCheckerApproach.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  104. def set[T](feature: ArrayFeature[T], value: Array[T]): ContextSpellCheckerApproach.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  105. final def set(paramPair: ParamPair[_]): ContextSpellCheckerApproach.this.type
    Attributes
    protected
    Definition Classes
    Params
  106. final def set(param: String, value: Any): ContextSpellCheckerApproach.this.type
    Attributes
    protected
    Definition Classes
    Params
  107. final def set[T](param: Param[T], value: T): ContextSpellCheckerApproach.this.type
    Definition Classes
    Params
  108. def setBatchSize(k: Int): ContextSpellCheckerApproach.this.type

  109. def setCaseStrategy(k: Int): ContextSpellCheckerApproach.this.type

  110. def setClassCount(t: Double): ContextSpellCheckerApproach.this.type

  111. def setCompoundCount(k: Int): ContextSpellCheckerApproach.this.type

  112. def setConfigProtoBytes(bytes: Array[Int]): ContextSpellCheckerApproach.this.type

  113. def setDefault[T](feature: StructFeature[T], value: () ⇒ T): ContextSpellCheckerApproach.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  114. def setDefault[K, V](feature: MapFeature[K, V], value: () ⇒ Map[K, V]): ContextSpellCheckerApproach.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  115. def setDefault[T](feature: SetFeature[T], value: () ⇒ Set[T]): ContextSpellCheckerApproach.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  116. def setDefault[T](feature: ArrayFeature[T], value: () ⇒ Array[T]): ContextSpellCheckerApproach.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  117. final def setDefault(paramPairs: ParamPair[_]*): ContextSpellCheckerApproach.this.type
    Attributes
    protected
    Definition Classes
    Params
  118. final def setDefault[T](param: Param[T], value: T): ContextSpellCheckerApproach.this.type
    Attributes
    protected[org.apache.spark.ml]
    Definition Classes
    Params
  119. def setEpochs(k: Int): ContextSpellCheckerApproach.this.type

  120. def setErrorThreshold(t: Float): ContextSpellCheckerApproach.this.type

  121. def setFinalRate(r: Float): ContextSpellCheckerApproach.this.type

  122. def setGraphFolder(path: String): ContextSpellCheckerApproach.this.type

    Folder path that contains external graph files

  123. def setInitialRate(r: Float): ContextSpellCheckerApproach.this.type

  124. final def setInputCols(value: String*): ContextSpellCheckerApproach.this.type
    Definition Classes
    HasInputAnnotationCols
  125. def setInputCols(value: Array[String]): ContextSpellCheckerApproach.this.type

    Overrides required annotators column if different than default

    Definition Classes
    HasInputAnnotationCols
  126. def setLanguageModelClasses(k: Int): ContextSpellCheckerApproach.this.type

  127. def setLazyAnnotator(value: Boolean): ContextSpellCheckerApproach.this.type
    Definition Classes
    CanBeLazy
  128. def setMaxCandidates(k: Int): ContextSpellCheckerApproach.this.type

  129. def setMaxWindowLen(w: Int): ContextSpellCheckerApproach.this.type

  130. def setMinCount(threshold: Double): ContextSpellCheckerApproach.this.type

  131. final def setOutputCol(value: String): ContextSpellCheckerApproach.this.type

    Overrides annotation column name when transforming

    Definition Classes
    HasOutputAnnotationCol
  132. def setSpecialClasses(parsers: List[SpecialClassParser]): ContextSpellCheckerApproach.this.type

  133. def setTradeoff(alpha: Float): ContextSpellCheckerApproach.this.type

  134. def setValidationFraction(r: Float): ContextSpellCheckerApproach.this.type

  135. def setWeightedDistPath(filePath: String): ContextSpellCheckerApproach.this.type

  136. def setWordMaxDistance(k: Int): ContextSpellCheckerApproach.this.type

  137. val specialClasses: Param[List[SpecialClassParser]]

    List of parsers for special classes (Default: List(new DateToken, new NumberToken)).

  138. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  139. def toString(): String
    Definition Classes
    Identifiable → AnyRef → Any
  140. val tradeoff: Param[Float]

    Tradeoff between the cost of a word error and a transition in the language model (Default: 18.0f).
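
One plausible reading of this parameter (the exact combination inside the library may differ) is a linear tradeoff between the language-model cost and the edit cost when ranking candidates. A toy, self-contained sketch with made-up numbers:

```scala
// Toy noisy-channel ranking: each candidate carries a language-model cost and
// an edit (word-error) cost; the tradeoff parameter weights the edit cost.
case class Candidate(word: String, lmCost: Float, editCost: Float)

def rank(candidates: Seq[Candidate], tradeoff: Float): Seq[Candidate] =
  candidates.sortBy(c => c.lmCost + tradeoff * c.editCost)

val candidates = Seq(
  Candidate("siege", lmCost = 4.0f, editCost = 1.0f),
  Candidate("size",  lmCost = 2.0f, editCost = 2.0f)
)
// With a small tradeoff the language model dominates; with a large one the
// cheaper edit wins.
val lmFavored   = rank(candidates, tradeoff = 0.5f).head.word  // "size"
val editFavored = rank(candidates, tradeoff = 18.0f).head.word // "siege"
```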

  141. def train(dataset: Dataset[_], recursivePipeline: Option[PipelineModel]): ContextSpellCheckerModel
  142. final def transformSchema(schema: StructType): StructType

    Requirement for pipeline transformation validation. It is called on fit()

    Definition Classes
    AnnotatorApproach → PipelineStage
  143. def transformSchema(schema: StructType, logging: Boolean): StructType
    Attributes
    protected
    Definition Classes
    PipelineStage
    Annotations
    @DeveloperApi()
  144. val uid: String
    Definition Classes
    ContextSpellCheckerApproach → Identifiable
  145. def validate(schema: StructType): Boolean

    Takes a Dataset and checks to see if all the required annotation types are present.

    schema

    to be validated

    returns

    True if all the required types are present, else false

    Attributes
    protected
    Definition Classes
    AnnotatorApproach
  146. val validationFraction: FloatParam

    Percentage of datapoints to use for validation (Default: .1f).

  147. def wLevenshteinDist(s1: String, s2: String, weights: Map[String, Map[String, Float]]): Float
    Definition Classes
    WeightedLevenshtein
  148. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  149. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  150. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  151. val weightedDistPath: Param[String]

    The path to the file containing the weights for the Levenshtein distance.

  152. val wordMaxDistance: IntParam

    Maximum distance for the generated candidates for every word (Default: 3).
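
This bounds how far, in edit operations, a candidate may be from the observed word. A self-contained sketch of such a filter over a small vocabulary (the vocabulary and helper below are illustrative, not the library's internals):

```scala
// Classic edit distance via dynamic programming.
def editDistance(a: String, b: String): Int = {
  val d = Array.ofDim[Int](a.length + 1, b.length + 1)
  for (i <- 0 to a.length) d(i)(0) = i
  for (j <- 0 to b.length) d(0)(j) = j
  for (i <- 1 to a.length; j <- 1 to b.length) {
    val sub = if (a(i - 1) == b(j - 1)) 0 else 1
    d(i)(j) = math.min(math.min(d(i - 1)(j) + 1, d(i)(j - 1) + 1), d(i - 1)(j - 1) + sub)
  }
  d(a.length)(b.length)
}

// Keep only vocabulary entries within maxDist edits of the observed word.
def candidates(word: String, vocab: Seq[String], maxDist: Int): Seq[String] =
  vocab.filter(v => editDistance(word, v) <= maxDist)

val vocab = Seq("morning", "warning", "mourning", "evening")
// "morninng" is 1 edit from "morning" and 2 edits from "mourning"
val cands = candidates("morninng", vocab, maxDist = 2)
```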

  153. def write: MLWriter
    Definition Classes
    DefaultParamsWritable → MLWritable

Inherited from WeightedLevenshtein

Inherited from HasFeatures

Inherited from CanBeLazy

Inherited from DefaultParamsWritable

Inherited from MLWritable

Inherited from HasOutputAnnotatorType

Inherited from HasOutputAnnotationCol

Inherited from HasInputAnnotationCols

Inherited from Estimator[ContextSpellCheckerModel]

Inherited from PipelineStage

Inherited from Logging

Inherited from Params

Inherited from Serializable

Inherited from Serializable

Inherited from Identifiable

Inherited from AnyRef

Inherited from Any

Parameters

A list of (hyper-)parameter keys this annotator can take. Users can set and get the parameter values through setters and getters, respectively.

Annotator types

Required input and expected output annotator types

Members

Parameter setters

Parameter getters