
class WordSegmenterApproach extends AnnotatorApproach[WordSegmenterModel] with PerceptronTrainingUtils

Trains a WordSegmenter which tokenizes non-English or non-whitespace-separated texts.

Many languages, such as Chinese, Japanese or Korean, are not whitespace separated: their sentences are written as a continuous sequence of symbols. Without understanding the language, splitting the text into its corresponding words is impossible. The WordSegmenter is trained to understand these languages and split them into semantically correct parts.

This annotator is based on the paper Chinese Word Segmentation as Character Tagging [1], which treats word segmentation as a tagging problem. Each character is tagged with one of four labels: LL (left boundary), RR (right boundary), MM (middle) and LR (word by itself). The label depends on the character's position within a word: characters tagged LL combine with the character to their right, characters tagged RR combine with the character to their left, characters tagged MM sit in the middle of a word and combine with both sides, and characters tagged LR form words by themselves.

Example (from [1], Example 3(a) (raw), 3(b) (tagged), 3(c) (translation)):

  • 上海 计划 到 本 世纪 末 实现 人均 国内 生产 总值 五千 美元
  • 上/LL 海/RR 计/LL 划/RR 到/LR 本/LR 世/LL 纪/RR 末/LR 实/LL 现/RR 人/LL 均/RR 国/LL 内/RR 生/LL 产/RR 总/LL 值/RR 五/LL 千/RR 美/LL 元/RR
  • Shanghai plans to reach the goal of 5,000 dollars in per capita GDP by the end of the century.
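
To make the tag semantics concrete, the following sketch folds a tagged character sequence back into words. It is purely illustrative (the object and method names are not part of the library): LL and LR start a new word, while MM and RR extend the current one.

object TagMergeSketch {
  // Merge (character, tag) pairs into words according to the LL/RR/MM/LR scheme.
  def mergeTagged(tagged: Seq[(String, String)]): Seq[String] =
    tagged.foldLeft(Vector.empty[String]) { case (words, (c, tag)) =>
      tag match {
        case "LL" | "LR"                   => words :+ c                     // a new word starts here
        case "MM" | "RR" if words.nonEmpty => words.init :+ (words.last + c) // extend the current word
        case _                             => words :+ c                     // malformed input: start a new word anyway
      }
    }

  def main(args: Array[String]): Unit = {
    val tagged = Seq("上" -> "LL", "海" -> "RR", "计" -> "LL", "划" -> "RR", "到" -> "LR")
    println(mergeTagged(tagged).mkString(" ")) // prints: 上海 计划 到
  }
}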

For instantiated/pretrained models, see WordSegmenterModel.

To train your own model, a training dataset consisting of Part-Of-Speech tags is required. The data has to be loaded into a DataFrame, where the tag column contains Annotations of type "POS". The name of this column is set with setPosColumn.

Tip: The helper class POS might be useful to read training data into data frames.

For extended examples of usage, see the Examples and the WordSegmenterTest.

References:

  • [1] Xue, Nianwen. “Chinese Word Segmentation as Character Tagging.” International Journal of Computational Linguistics & Chinese Language Processing, Volume 8, Number 1, February 2003: Special Issue on Word Formation and Chinese Language Processing, 2003, pp. 29-48. ACLWeb, https://aclanthology.org/O03-4002.

Example

In this example, "chinese_train.utf8" is in the form of

十|LL 四|RR 不|LL 是|RR 四|LL 十|RR

and is loaded with the POS class to create a dataframe of "POS" type Annotations.

import com.johnsnowlabs.nlp.base.DocumentAssembler
import com.johnsnowlabs.nlp.annotators.ws.WordSegmenterApproach
import com.johnsnowlabs.nlp.training.POS
import org.apache.spark.ml.Pipeline

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val wordSegmenter = new WordSegmenterApproach()
  .setInputCols("document")
  .setOutputCol("token")
  .setPosColumn("tags")
  .setNIterations(5)

val pipeline = new Pipeline().setStages(Array(
  documentAssembler,
  wordSegmenter
))

val trainingDataSet = POS().readDataset(
  spark,
  "src/test/resources/word-segmenter/chinese_train.utf8"
)

val pipelineModel = pipeline.fit(trainingDataSet)
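
Once the pipeline is fitted, the resulting model can be applied to new text. The following usage sketch is illustrative (the sample sentence and the implicits import are assumptions, not part of the original example): it transforms a small DataFrame and lists the resulting tokens.

import spark.implicits._

val testData = Seq("上海计划到本世纪末实现人均国内生产总值五千美元").toDF("text")

pipelineModel.transform(testData)
  .selectExpr("explode(token.result) as token")
  .show(truncate = false)
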
Linear Supertypes
PerceptronTrainingUtils, PerceptronUtils, AnnotatorApproach[WordSegmenterModel], CanBeLazy, DefaultParamsWritable, MLWritable, HasOutputAnnotatorType, HasOutputAnnotationCol, HasInputAnnotationCols, Estimator[WordSegmenterModel], PipelineStage, Logging, Params, Serializable, Serializable, Identifiable, AnyRef, Any

Instance Constructors

  1. new WordSegmenterApproach()

    Annotator reference id. Used to identify elements in metadata or to refer to this annotator type

  2. new WordSegmenterApproach(uid: String)

    uid

    required uid for storing annotator to disk

Type Members

  1. type AnnotatorType = String
    Definition Classes
    HasOutputAnnotatorType

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def $[T](param: Param[T]): T
    Attributes
    protected
    Definition Classes
    Params
  4. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  5. def _fit(dataset: Dataset[_], recursiveStages: Option[PipelineModel]): WordSegmenterModel
    Attributes
    protected
    Definition Classes
    AnnotatorApproach
  6. val ambiguityThreshold: DoubleParam

    Percentage of the total number of words that must be covered for a tag to be marked as frequent (Default: 0.97)

  7. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  8. def beforeTraining(spark: SparkSession): Unit
    Definition Classes
    AnnotatorApproach
  9. def buildTagBook(taggedSentences: Array[TaggedSentence], frequencyThreshold: Int, ambiguityThreshold: Double): Map[String, String]

    Finds very frequent tags on a word in training and marks them as non-ambiguous based on the tuning parameters. ToDo: Move such parameters to configuration

    taggedSentences

    Takes entire tagged sentences to find frequent tags

    frequencyThreshold

    Minimum number of times a tag must appear on a word for it to be marked as frequent

    ambiguityThreshold

    Percentage of the total number of words that must be covered for a tag to be marked as frequent

    Definition Classes
    PerceptronTrainingUtils
  10. final def checkSchema(schema: StructType, inputAnnotatorType: String): Boolean
    Attributes
    protected
    Definition Classes
    HasInputAnnotationCols
  11. final def clear(param: Param[_]): WordSegmenterApproach.this.type
    Definition Classes
    Params
  12. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  13. final def copy(extra: ParamMap): Estimator[WordSegmenterModel]
    Definition Classes
    AnnotatorApproach → Estimator → PipelineStage → Params
  14. def copyValues[T <: Params](to: T, extra: ParamMap): T
    Attributes
    protected
    Definition Classes
    Params
  15. final def defaultCopy[T <: Params](extra: ParamMap): T
    Attributes
    protected
    Definition Classes
    Params
  16. val description: String
  17. val enableRegexTokenizer: BooleanParam
  18. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  19. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  20. def explainParam(param: Param[_]): String
    Definition Classes
    Params
  21. def explainParams(): String
    Definition Classes
    Params
  22. final def extractParamMap(): ParamMap
    Definition Classes
    Params
  23. final def extractParamMap(extra: ParamMap): ParamMap
    Definition Classes
    Params
  24. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  25. final def fit(dataset: Dataset[_]): WordSegmenterModel
    Definition Classes
    AnnotatorApproach → Estimator
  26. def fit(dataset: Dataset[_], paramMaps: Seq[ParamMap]): Seq[WordSegmenterModel]
    Definition Classes
    Estimator
    Annotations
    @Since( "2.0.0" )
  27. def fit(dataset: Dataset[_], paramMap: ParamMap): WordSegmenterModel
    Definition Classes
    Estimator
    Annotations
    @Since( "2.0.0" )
  28. def fit(dataset: Dataset[_], firstParamPair: ParamPair[_], otherParamPairs: ParamPair[_]*): WordSegmenterModel
    Definition Classes
    Estimator
    Annotations
    @Since( "2.0.0" ) @varargs()
  29. val frequencyThreshold: IntParam

    Minimum number of times a tag must appear on a word for it to be marked as frequent (Default: 20)

  30. def generatesTagBook(dataset: Dataset[_]): Array[TaggedSentence]

    Generates the TagBook, which holds all the word-to-tag mappings that are not ambiguous

    Definition Classes
    PerceptronTrainingUtils
  31. final def get[T](param: Param[T]): Option[T]
    Definition Classes
    Params
  32. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  33. final def getDefault[T](param: Param[T]): Option[T]
    Definition Classes
    Params
  34. def getInputCols: Array[String]

    returns

    input annotations columns currently used

    Definition Classes
    HasInputAnnotationCols
  35. def getLazyAnnotator: Boolean
    Definition Classes
    CanBeLazy
  36. def getNIterations: Int

  37. final def getOrDefault[T](param: Param[T]): T
    Definition Classes
    Params
  38. final def getOutputCol: String

    Gets the annotation column name that will be generated

    Definition Classes
    HasOutputAnnotationCol
  39. def getParam(paramName: String): Param[Any]
    Definition Classes
    Params
  40. final def hasDefault[T](param: Param[T]): Boolean
    Definition Classes
    Params
  41. def hasParam(paramName: String): Boolean
    Definition Classes
    Params
  42. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  43. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  44. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  45. val inputAnnotatorTypes: Array[String]

    Input Annotator Types: DOCUMENT

    Definition Classes
    WordSegmenterApproach → HasInputAnnotationCols
  46. final val inputCols: StringArrayParam

    Columns that contain annotations necessary to run this annotator. AnnotatorType is used both as input and output columns if not specified

    Attributes
    protected
    Definition Classes
    HasInputAnnotationCols
  47. final def isDefined(param: Param[_]): Boolean
    Definition Classes
    Params
  48. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  49. final def isSet(param: Param[_]): Boolean
    Definition Classes
    Params
  50. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  51. val lazyAnnotator: BooleanParam
    Definition Classes
    CanBeLazy
  52. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  53. def logDebug(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  54. def logDebug(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  55. def logError(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  56. def logError(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  57. def logInfo(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  58. def logInfo(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  59. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  60. def logTrace(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  61. def logTrace(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  62. def logWarning(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  63. def logWarning(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  64. def msgHelper(schema: StructType): String
    Attributes
    protected
    Definition Classes
    HasInputAnnotationCols
  65. val nIterations: IntParam

    Number of training iterations; more iterations converge to better accuracy (Default: 5)

  66. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  67. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  68. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  69. def onTrained(model: WordSegmenterModel, spark: SparkSession): Unit
    Definition Classes
    AnnotatorApproach
  70. val optionalInputAnnotatorTypes: Array[String]
    Definition Classes
    HasInputAnnotationCols
  71. val outputAnnotatorType: AnnotatorType

    Output Annotator Types: TOKEN

    Definition Classes
    WordSegmenterApproach → HasOutputAnnotatorType
  72. final val outputCol: Param[String]
    Attributes
    protected
    Definition Classes
    HasOutputAnnotationCol
  73. lazy val params: Array[Param[_]]
    Definition Classes
    Params
  74. val pattern: Param[String]

    Regex pattern used to match delimiters (Default: "\\s+")

  75. val posCol: Param[String]

    Column of Array of POS tags that match tokens

  76. def save(path: String): Unit
    Definition Classes
    MLWritable
    Annotations
    @Since( "1.6.0" ) @throws( ... )
  77. final def set(paramPair: ParamPair[_]): WordSegmenterApproach.this.type
    Attributes
    protected
    Definition Classes
    Params
  78. final def set(param: String, value: Any): WordSegmenterApproach.this.type
    Attributes
    protected
    Definition Classes
    Params
  79. final def set[T](param: Param[T], value: T): WordSegmenterApproach.this.type
    Definition Classes
    Params
  80. def setAmbiguityThreshold(value: Double): WordSegmenterApproach.this.type

  81. final def setDefault(paramPairs: ParamPair[_]*): WordSegmenterApproach.this.type
    Attributes
    protected
    Definition Classes
    Params
  82. final def setDefault[T](param: Param[T], value: T): WordSegmenterApproach.this.type
    Attributes
    protected[org.apache.spark.ml]
    Definition Classes
    Params
  83. def setEnableRegexTokenizer(value: Boolean): WordSegmenterApproach.this.type

  84. def setFrequencyThreshold(value: Int): WordSegmenterApproach.this.type

  85. final def setInputCols(value: String*): WordSegmenterApproach.this.type
    Definition Classes
    HasInputAnnotationCols
  86. def setInputCols(value: Array[String]): WordSegmenterApproach.this.type

    Overrides required annotator columns if different than default

    Definition Classes
    HasInputAnnotationCols
  87. def setLazyAnnotator(value: Boolean): WordSegmenterApproach.this.type
    Definition Classes
    CanBeLazy
  88. def setNIterations(value: Int): WordSegmenterApproach.this.type

  89. final def setOutputCol(value: String): WordSegmenterApproach.this.type

    Overrides annotation column name when transforming

    Definition Classes
    HasOutputAnnotationCol
  90. def setPattern(value: String): WordSegmenterApproach.this.type

  91. def setPosColumn(value: String): WordSegmenterApproach.this.type

  92. def setToLowercase(value: Boolean): WordSegmenterApproach.this.type

  93. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  94. val toLowercase: BooleanParam

    Indicates whether to convert all characters to lowercase before tokenizing (Default: false).

  95. def toString(): String
    Definition Classes
    Identifiable → AnyRef → Any
  96. def train(dataset: Dataset[_], recursivePipeline: Option[PipelineModel]): WordSegmenterModel
  97. def trainPerceptron(nIterations: Int, initialModel: TrainingPerceptronLegacy, taggedSentences: Array[TaggedSentence], taggedWordBook: Map[String, String]): AveragedPerceptron

    Iterates for training

    Definition Classes
    PerceptronTrainingUtils
  98. final def transformSchema(schema: StructType): StructType

    requirement for pipeline transformation validation. It is called on fit()

    Definition Classes
    AnnotatorApproach → PipelineStage
  99. def transformSchema(schema: StructType, logging: Boolean): StructType
    Attributes
    protected
    Definition Classes
    PipelineStage
    Annotations
    @DeveloperApi()
  100. val uid: String
    Definition Classes
    WordSegmenterApproach → Identifiable
  101. def validate(schema: StructType): Boolean

    takes a Dataset and checks to see if all the required annotation types are present.

    schema

    to be validated

    returns

    True if all the required types are present, else false

    Attributes
    protected
    Definition Classes
    AnnotatorApproach
  102. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  103. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  104. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  105. def write: MLWriter
    Definition Classes
    DefaultParamsWritable → MLWritable


Parameters

A list of (hyper-)parameter keys this annotator can take. Users can set and get the parameter values through setters and getters, respectively.
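
For example, parameters can be set fluently through their setters and read back through their getters. A brief sketch (reusing the imports from the example above; the values shown mirror the documented defaults):

val segmenter = new WordSegmenterApproach()
  .setInputCols("document")
  .setOutputCol("token")
  .setPosColumn("tags")
  .setNIterations(5)
  .setFrequencyThreshold(20)
  .setAmbiguityThreshold(0.97)

println(segmenter.getNIterations) // 5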

Annotator types

Required input and expected output annotator types
