class LLMEntityExtractor extends AnnotatorModel[LLMEntityExtractor] with HasBatchedAnnotate[LLMEntityExtractor] with HasLlamaCppModelProperties with HasLlamaCppInferenceProperties with HasProtectedParams with HasEngine

End-to-end LLM-based entity extraction using AutoGGUF with BNF grammars.

LLMEntityExtractor is an end-to-end annotator that performs entity extraction from text using Large Language Models (LLMs) with structured JSON output via BNF grammars. It embeds AutoGGUFModel directly and uses simple string matching to compute character indices for extracted entities.

This annotator follows the LangExtract pattern from Google Research, combining few-shot prompting with constrained generation through llama.cpp BNF grammars to ensure valid JSON output.

The LLM generates responses in this format (enforced by grammar):

{
  "extractions": [
    {"entity": "MEDICATION", "text": "aspirin"},
    {"entity": "DOSAGE", "text": "250mg"}
  ]
}

The annotator then performs string matching to find the character positions of each entity in the original text, outputting CHUNK annotations with accurate begin/end indices.
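The matching step can be sketched in plain Scala. This is a hypothetical helper, not the annotator's actual implementation; it only illustrates how string matching yields the begin/end indices a CHUNK annotation carries.

```scala
// Hypothetical sketch of the index-matching step: given the original text and
// an entity string extracted by the LLM, find the begin/end character indices.
// Not the annotator's actual implementation.
def findEntitySpan(
    text: String,
    entity: String,
    caseSensitive: Boolean = false): Option[(Int, Int)] = {
  val (haystack, needle) =
    if (caseSensitive) (text, entity)
    else (text.toLowerCase, entity.toLowerCase)
  val begin = haystack.indexOf(needle)
  if (begin < 0) None
  else Some((begin, begin + needle.length - 1)) // end index is inclusive
}

findEntitySpan("Patient prescribed 500mg amoxicillin PO TID", "amoxicillin")
// Some((25, 35))
```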

Batch processing is used for performance: all documents in a batch are processed together in a single LLM call via multiComplete for maximum throughput.

Example

import spark.implicits._
import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotators.ner.dl.LLMEntityExtractor
import org.apache.spark.ml.Pipeline

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val entityExtractor = LLMEntityExtractor
  .pretrained("qwen3_4b_bf16_gguf")
  .setInputCols("document")
  .setOutputCol("entities")
  .setEntityTypes(Array("MEDICATION", "DOSAGE", "ROUTE", "FREQUENCY"))
  .setNPredict(500)
  .setTemperature(0.1f)

val pipeline = new Pipeline().setStages(Array(documentAssembler, entityExtractor))

val data = Seq("Patient prescribed 500mg amoxicillin PO TID").toDF("text")
val result = pipeline.fit(data).transform(data)

result.select("entities.result", "entities.metadata").show(false)
+------------------------------+--------------------------------+
|result                        |metadata                        |
+------------------------------+--------------------------------+
|[500mg, amoxicillin, PO, TID] |[{entity -> DOSAGE}, ...]       |
+------------------------------+--------------------------------+
Inherited
  1. LLMEntityExtractor
  2. HasEngine
  3. HasProtectedParams
  4. HasLlamaCppInferenceProperties
  5. HasLlamaCppModelProperties
  6. HasBatchedAnnotate
  7. AnnotatorModel
  8. CanBeLazy
  9. RawAnnotator
  10. HasOutputAnnotationCol
  11. HasInputAnnotationCols
  12. HasOutputAnnotatorType
  13. ParamsAndFeaturesWritable
  14. HasFeatures
  15. DefaultParamsWritable
  16. MLWritable
  17. Model
  18. Transformer
  19. PipelineStage
  20. Logging
  21. Params
  22. Serializable
  23. Serializable
  24. Identifiable
  25. AnyRef
  26. Any

Instance Constructors

  1. new LLMEntityExtractor()

    Annotator reference id. Used to identify elements in metadata or to refer to this annotator type

  2. new LLMEntityExtractor(uid: String)

    uid

    required uid for storing annotator to disk

Type Members

  1. implicit class ProtectedParam[T] extends Param[T]
    Definition Classes
    HasProtectedParams
  2. type AnnotationContent = Seq[Row]

    Internal types to show Rows as a relevant StructType. Should be deleted once Spark releases UserDefinedTypes to @developerAPI

    Attributes
    protected
    Definition Classes
    AnnotatorModel
  3. type AnnotatorType = String
    Definition Classes
    HasOutputAnnotatorType

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def $[T](param: Param[T]): T
    Attributes
    protected
    Definition Classes
    Params
  4. def $$[T](feature: StructFeature[T]): T
    Attributes
    protected
    Definition Classes
    HasFeatures
  5. def $$[K, V](feature: MapFeature[K, V]): Map[K, V]
    Attributes
    protected
    Definition Classes
    HasFeatures
  6. def $$[T](feature: SetFeature[T]): Set[T]
    Attributes
    protected
    Definition Classes
    HasFeatures
  7. def $$[T](feature: ArrayFeature[T]): Array[T]
    Attributes
    protected
    Definition Classes
    HasFeatures
  8. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  9. def _transform(dataset: Dataset[_], recursivePipeline: Option[PipelineModel]): DataFrame
    Attributes
    protected
    Definition Classes
    AnnotatorModel
  10. def afterAnnotate(dataset: DataFrame): DataFrame
    Attributes
    protected
    Definition Classes
    AnnotatorModel
  11. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  12. def batchAnnotate(batchedAnnotations: Seq[Array[Annotation]]): Seq[Seq[Annotation]]

    Batch annotation method - processes all documents through AutoGGUF

    batchedAnnotations

    Batched annotations (documents) to process

    returns

    Extracted entity annotations with character indices for each document

    Definition Classes
    LLMEntityExtractor → HasBatchedAnnotate
  13. def batchProcess(rows: Iterator[_]): Iterator[Row]
    Definition Classes
    HasBatchedAnnotate
  14. val batchSize: IntParam

    Size of every batch (Default depends on model).

    Definition Classes
    HasBatchedAnnotate
  15. def beforeAnnotate(dataset: Dataset[_]): Dataset[_]
    Attributes
    protected
    Definition Classes
    AnnotatorModel
  16. val cachePrompt: BooleanParam

  17. val caseSensitive: BooleanParam

    Case sensitivity for entity matching (Default: false)

    When false, entity matching is case-insensitive.

  18. val chatTemplate: Param[String]

    Definition Classes
    HasLlamaCppModelProperties
  19. final def checkSchema(schema: StructType, inputAnnotatorType: String): Boolean
    Attributes
    protected
    Definition Classes
    HasInputAnnotationCols
  20. final def clear(param: Param[_]): LLMEntityExtractor.this.type
    Definition Classes
    Params
  21. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  22. def close(): Unit

    Closes the llama.cpp model backend, freeing resources. The model is reloaded when used again.

  23. def copy(extra: ParamMap): LLMEntityExtractor

    Requirement for annotator copies

    Definition Classes
    RawAnnotator → Model → Transformer → PipelineStage → Params
  24. def copyValues[T <: Params](to: T, extra: ParamMap): T
    Attributes
    protected
    Definition Classes
    Params
  25. final def defaultCopy[T <: Params](extra: ParamMap): T
    Attributes
    protected
    Definition Classes
    Params
  26. val defaultGrammar: String
  27. val defaultPrompt: String
  28. val defragmentationThreshold: FloatParam

    Definition Classes
    HasLlamaCppModelProperties
  29. val disableLog: BooleanParam

    Definition Classes
    HasLlamaCppModelProperties
  30. val disableTokenIds: IntArrayParam

  31. val dynamicTemperatureExponent: FloatParam

  32. val dynamicTemperatureRange: FloatParam

  33. val engine: Param[String]

    This param is set internally once via loadSavedModel, which is why there is no setter.

    Definition Classes
    HasEngine
  34. val entityTypes: StringArrayParam

    List of entity types to extract (Default: general types)

    These entity types are used in the prompt to guide the LLM's extraction.

  35. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  36. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  37. def explainParam(param: Param[_]): String
    Definition Classes
    Params
  38. def explainParams(): String
    Definition Classes
    Params
  39. final val extraInputCols: StringArrayParam
    Attributes
    protected
    Definition Classes
    HasInputAnnotationCols
  40. def extraValidate(structType: StructType): Boolean
    Attributes
    protected
    Definition Classes
    RawAnnotator
  41. def extraValidateMsg: String

    Override for additional custom schema checks

    Attributes
    protected
    Definition Classes
    RawAnnotator
  42. final def extractParamMap(): ParamMap
    Definition Classes
    Params
  43. final def extractParamMap(extra: ParamMap): ParamMap
    Definition Classes
    Params
  44. val features: ArrayBuffer[Feature[_, _, _]]
    Definition Classes
    HasFeatures
  45. val fewShotExamples: Param[Array[(String, String)]]

    Few-shot examples for the prompt (Default: empty array)

    Each example should be a tuple of (input_text, json_output). These examples will be inserted into the prompt to help the LLM understand the expected output format.
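    A minimal sketch of assembling such pairs. The array shape matches the setFewShotExamples(Array[(String, String)]) overload; the renderExamples helper is hypothetical, shown only to illustrate how the pairs could feed a prompt.

```scala
// Few-shot pairs of (input_text, json_output), the shape expected by
// setFewShotExamples. The example content is illustrative.
val fewShot: Array[(String, String)] = Array(
  ("Patient takes aspirin 250mg daily",
   """{"extractions": [{"entity": "MEDICATION", "text": "aspirin"}, {"entity": "DOSAGE", "text": "250mg"}]}"""))

// Hypothetical helper illustrating how such pairs might be rendered into a prompt.
def renderExamples(examples: Array[(String, String)]): String =
  examples
    .map { case (input, output) => s"Input: $input\nOutput: $output" }
    .mkString("\n\n")
```

    The array can then be passed to the annotator via .setFewShotExamples(fewShot).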

  46. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  47. val flashAttention: BooleanParam

    Definition Classes
    HasLlamaCppModelProperties
  48. val frequencyPenalty: FloatParam

  49. def get[T](feature: StructFeature[T]): Option[T]
    Attributes
    protected
    Definition Classes
    HasFeatures
  50. def get[K, V](feature: MapFeature[K, V]): Option[Map[K, V]]
    Attributes
    protected
    Definition Classes
    HasFeatures
  51. def get[T](feature: SetFeature[T]): Option[Set[T]]
    Attributes
    protected
    Definition Classes
    HasFeatures
  52. def get[T](feature: ArrayFeature[T]): Option[Array[T]]
    Attributes
    protected
    Definition Classes
    HasFeatures
  53. final def get[T](param: Param[T]): Option[T]
    Definition Classes
    Params
  54. def getBatchSize: Int

    Size of every batch.

    Definition Classes
    HasBatchedAnnotate
  55. def getCachePrompt: Boolean

  56. def getCaseSensitive: Boolean

  57. def getChatTemplate: String

    Definition Classes
    HasLlamaCppModelProperties
  58. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  59. final def getDefault[T](param: Param[T]): Option[T]
    Definition Classes
    Params
  60. def getDefragmentationThreshold: Float

    Definition Classes
    HasLlamaCppModelProperties
  61. def getDisableLog: Boolean

    Definition Classes
    HasLlamaCppModelProperties
  62. def getDisableTokenIds: Array[Int]

  63. def getDynamicTemperatureExponent: Float

  64. def getDynamicTemperatureRange: Float

  65. def getEngine: String

    Definition Classes
    HasEngine
  66. def getEntityTypes: Array[String]

  67. def getFewShotExamples: Array[(String, String)]

  68. def getFlashAttention: Boolean

    Definition Classes
    HasLlamaCppModelProperties
  69. def getFrequencyPenalty: Float

  70. def getGrammar: String

  71. def getIgnoreEos: Boolean

  72. def getInferenceParameters: InferenceParameters
    Attributes
    protected
    Definition Classes
    HasLlamaCppInferenceProperties
  73. def getInputCols: Array[String]

    returns

    input annotations columns currently used

    Definition Classes
    HasInputAnnotationCols
  74. def getInputPrefix: String

  75. def getInputSuffix: String

  76. def getLazyAnnotator: Boolean
    Definition Classes
    CanBeLazy
  77. def getLogVerbosity: Int
    Definition Classes
    HasLlamaCppModelProperties
  78. def getMainGpu: Int

    Definition Classes
    HasLlamaCppModelProperties
  79. def getMetadata: String

    Get the metadata for the model

    Definition Classes
    HasLlamaCppModelProperties
  80. def getMetadataMap: Map[String, Map[String, String]]
    Definition Classes
    HasLlamaCppModelProperties
  81. def getMinKeep: Int

  82. def getMinP: Float

  83. def getMiroStat: String

  84. def getMiroStatEta: Float

  85. def getMiroStatTau: Float

  86. def getModelDraft: String

    Definition Classes
    HasLlamaCppModelProperties
  87. def getModelIfNotSet: GGUFWrapper

  88. def getModelParameters: ModelParameters
    Attributes
    protected
    Definition Classes
    HasLlamaCppModelProperties
  89. def getNBatch: Int

    Definition Classes
    HasLlamaCppModelProperties
  90. def getNCtx: Int

    Definition Classes
    HasLlamaCppModelProperties
  91. def getNDraft: Int

    Definition Classes
    HasLlamaCppModelProperties
  92. def getNGpuLayers: Int

    Definition Classes
    HasLlamaCppModelProperties
  93. def getNGpuLayersDraft: Int

    Definition Classes
    HasLlamaCppModelProperties
  94. def getNKeep: Int

  95. def getNPredict: Int
  96. def getNProbs: Int

  97. def getNThreads: Int

    Definition Classes
    HasLlamaCppModelProperties
  98. def getNThreadsBatch: Int

    Definition Classes
    HasLlamaCppModelProperties
  99. def getNUbatch: Int

    Definition Classes
    HasLlamaCppModelProperties
  100. def getNoKvOffload: Boolean

    Definition Classes
    HasLlamaCppModelProperties
  101. def getNuma: String

    Definition Classes
    HasLlamaCppModelProperties
  102. final def getOrDefault[T](param: Param[T]): T
    Definition Classes
    Params
  103. final def getOutputCol: String

    Gets the name of the output annotation column to be generated

    Definition Classes
    HasOutputAnnotationCol
  104. def getParam(paramName: String): Param[Any]
    Definition Classes
    Params
  105. def getPenalizeNl: Boolean

  106. def getPenaltyPrompt: String

  107. def getPresencePenalty: Float

  108. def getPromptTemplate: String

  109. def getReasoningBudget: Int

    Definition Classes
    HasLlamaCppModelProperties
  110. def getRepeatLastN: Int

  111. def getRepeatPenalty: Float

  112. def getRopeFreqBase: Float

    Definition Classes
    HasLlamaCppModelProperties
  113. def getRopeFreqScale: Float

    Definition Classes
    HasLlamaCppModelProperties
  114. def getRopeScalingType: String

    Definition Classes
    HasLlamaCppModelProperties
  115. def getSamplers: Array[String]

  116. def getSeed: Int

  117. def getSplitMode: String

    Definition Classes
    HasLlamaCppModelProperties
  118. def getStopStrings: Array[String]

  119. def getSystemPrompt: String

    Definition Classes
    HasLlamaCppModelProperties
  120. def getTemperature: Float

  121. def getTfsZ: Float

  122. def getTokenBias: Map[String, Float]

  123. def getTokenIdBias: Map[Int, Float]

  124. def getTopK: Int

  125. def getTopP: Float

  126. def getTypicalP: Float

  127. def getUseChatTemplate: Boolean

  128. def getUseMlock: Boolean

    Definition Classes
    HasLlamaCppModelProperties
  129. def getUseMmap: Boolean

    Definition Classes
    HasLlamaCppModelProperties
  130. def getYarnAttnFactor: Float

    Definition Classes
    HasLlamaCppModelProperties
  131. def getYarnBetaFast: Float

    Definition Classes
    HasLlamaCppModelProperties
  132. def getYarnBetaSlow: Float

    Definition Classes
    HasLlamaCppModelProperties
  133. def getYarnExtFactor: Float

    Definition Classes
    HasLlamaCppModelProperties
  134. def getYarnOrigCtx: Int

    Definition Classes
    HasLlamaCppModelProperties
  135. val gpuSplitMode: Param[String]

    Set how to split the model across GPUs

    • NONE: No GPU split
    • LAYER: Split the model across GPUs by layer
    • ROW: Split the model across GPUs by rows
    Definition Classes
    HasLlamaCppModelProperties
  136. val grammar: Param[String]

  137. final def hasDefault[T](param: Param[T]): Boolean
    Definition Classes
    Params
  138. def hasParam(paramName: String): Boolean
    Definition Classes
    Params
  139. def hasParent: Boolean
    Definition Classes
    Model
  140. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  141. val ignoreEos: BooleanParam

  142. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  143. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  144. val inputAnnotatorTypes: Array[String]

    Input Annotator Type: DOCUMENT

    Definition Classes
    LLMEntityExtractor → HasInputAnnotationCols
  145. final val inputCols: StringArrayParam

    Columns containing the annotations required to run this annotator. If not specified, the AnnotatorType is used for both input and output columns.

    Attributes
    protected
    Definition Classes
    HasInputAnnotationCols
  146. val inputPrefix: Param[String]

  147. val inputSuffix: Param[String]

  148. final def isDefined(param: Param[_]): Boolean
    Definition Classes
    Params
  149. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  150. final def isSet(param: Param[_]): Boolean
    Definition Classes
    Params
  151. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  152. val lazyAnnotator: BooleanParam
    Definition Classes
    CanBeLazy
  153. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  154. def logDebug(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  155. def logDebug(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  156. def logError(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  157. def logError(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  158. def logInfo(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  159. def logInfo(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  160. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  161. def logTrace(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  162. def logTrace(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  163. val logVerbosity: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  164. def logWarning(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  165. def logWarning(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  166. val logger: Logger
    Attributes
    protected
    Definition Classes
    HasLlamaCppModelProperties
  167. val mainGpu: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  168. val metadata: ProtectedParam[String]
    Definition Classes
    HasLlamaCppModelProperties
  169. val minKeep: IntParam

  170. val minP: FloatParam

  171. val miroStat: Param[String]

  172. val miroStatEta: FloatParam

  173. val miroStatTau: FloatParam

  174. val modelDraft: Param[String]

    Definition Classes
    HasLlamaCppModelProperties
  175. def msgHelper(schema: StructType): String
    Attributes
    protected
    Definition Classes
    HasInputAnnotationCols
  176. val nBatch: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  177. val nCtx: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  178. val nDraft: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  179. val nGpuLayers: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  180. val nGpuLayersDraft: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  181. val nKeep: IntParam

  182. val nPredict: IntParam

  183. val nProbs: IntParam

  184. val nThreads: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  185. val nThreadsBatch: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  186. val nUbatch: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  187. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  188. val noKvOffload: BooleanParam

    Definition Classes
    HasLlamaCppModelProperties
  189. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  190. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  191. val numaStrategy: Param[String]

    Set optimization strategies that help on some NUMA systems (if available)

    Available Strategies:

    • DISABLED: No NUMA optimizations
    • DISTRIBUTE: Spread execution evenly over all nodes
    • ISOLATE: Only spawn threads on CPUs on the node that execution started on
    • NUMA_CTL: Use the CPU map provided by numactl
    • MIRROR: Mirrors the model across NUMA nodes
    Definition Classes
    HasLlamaCppModelProperties
  192. def onWrite(path: String, spark: SparkSession): Unit
  193. val optionalInputAnnotatorTypes: Array[String]
    Definition Classes
    HasInputAnnotationCols
  194. val outputAnnotatorType: AnnotatorType

    Output Annotator Type: CHUNK

    Definition Classes
    LLMEntityExtractor → HasOutputAnnotatorType
  195. final val outputCol: Param[String]
    Attributes
    protected
    Definition Classes
    HasOutputAnnotationCol
  196. lazy val params: Array[Param[_]]
    Definition Classes
    Params
  197. var parent: Estimator[LLMEntityExtractor]
    Definition Classes
    Model
  198. val penalizeNl: BooleanParam

  199. val penaltyPrompt: Param[String]

  200. val presencePenalty: FloatParam

  201. val promptTemplate: Param[String]

    Custom prompt template for entity extraction.

    The prompt should include instructions for the LLM to extract entities in JSON format. Use {entityTypes} as a placeholder for the entity types list.
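    For example, a custom template might look like the following. The wording is illustrative (it is not the built-in defaultPrompt); only the {entityTypes} placeholder semantics come from the description above.

```scala
// Illustrative custom template; {entityTypes} is replaced with the configured
// entity type list. Not the built-in defaultPrompt.
val template =
  """Extract all entities of the following types from the text: {entityTypes}.
    |Respond only with JSON of the form {"extractions": [{"entity": ..., "text": ...}]}.""".stripMargin

// The substitution the placeholder implies:
val prompt = template.replace("{entityTypes}", Array("MEDICATION", "DOSAGE").mkString(", "))
```

    The template is then passed to the annotator via .setPromptTemplate(template).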

  202. val reasoningBudget: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  203. val repeatLastN: IntParam

  204. val repeatPenalty: FloatParam

  205. val ropeFreqBase: FloatParam

    Definition Classes
    HasLlamaCppModelProperties
  206. val ropeFreqScale: FloatParam

    Definition Classes
    HasLlamaCppModelProperties
  207. val ropeScalingType: Param[String]

    Set the RoPE frequency scaling method, defaults to linear unless specified by the model.

    • UNSPECIFIED: Don't use any scaling
    • LINEAR: Linear scaling
    • YARN: YaRN RoPE scaling
    Definition Classes
    HasLlamaCppModelProperties
  208. val samplers: StringArrayParam

  209. def save(path: String): Unit
    Definition Classes
    MLWritable
    Annotations
    @Since( "1.6.0" ) @throws( ... )
  210. val seed: IntParam

  211. def set[T](param: ProtectedParam[T], value: T): LLMEntityExtractor.this.type

    Sets the value for a protected Param.

    If the parameter was already set, it will not be set again. Default values do not count as a set value and can be overridden.

    T

    Type of the parameter

    param

    Protected parameter to set

    value

    Value for the parameter

    returns

    This object

    Definition Classes
    HasProtectedParams
  212. def set[T](feature: StructFeature[T], value: T): LLMEntityExtractor.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  213. def set[K, V](feature: MapFeature[K, V], value: Map[K, V]): LLMEntityExtractor.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  214. def set[T](feature: SetFeature[T], value: Set[T]): LLMEntityExtractor.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  215. def set[T](feature: ArrayFeature[T], value: Array[T]): LLMEntityExtractor.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  216. final def set(paramPair: ParamPair[_]): LLMEntityExtractor.this.type
    Attributes
    protected
    Definition Classes
    Params
  217. final def set(param: String, value: Any): LLMEntityExtractor.this.type
    Attributes
    protected
    Definition Classes
    Params
  218. final def set[T](param: Param[T], value: T): LLMEntityExtractor.this.type
    Definition Classes
    Params
  219. def setBatchSize(size: Int): LLMEntityExtractor.this.type

    Size of every batch.

    Definition Classes
    HasBatchedAnnotate
  220. def setCachePrompt(cachePrompt: Boolean): LLMEntityExtractor.this.type

    Whether to remember the prompt to avoid reprocessing it

    Definition Classes
    HasLlamaCppInferenceProperties
  221. def setCaseSensitive(value: Boolean): LLMEntityExtractor.this.type

  222. def setChatTemplate(chatTemplate: String): LLMEntityExtractor.this.type

    The chat template to use

    Definition Classes
    HasLlamaCppModelProperties
  223. def setDefault[T](feature: StructFeature[T], value: () ⇒ T): LLMEntityExtractor.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  224. def setDefault[K, V](feature: MapFeature[K, V], value: () ⇒ Map[K, V]): LLMEntityExtractor.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  225. def setDefault[T](feature: SetFeature[T], value: () ⇒ Set[T]): LLMEntityExtractor.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  226. def setDefault[T](feature: ArrayFeature[T], value: () ⇒ Array[T]): LLMEntityExtractor.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  227. final def setDefault(paramPairs: ParamPair[_]*): LLMEntityExtractor.this.type
    Attributes
    protected
    Definition Classes
    Params
  228. final def setDefault[T](param: Param[T], value: T): LLMEntityExtractor.this.type
    Attributes
    protected[org.apache.spark.ml]
    Definition Classes
    Params
  229. def setDefragmentationThreshold(defragThold: Float): LLMEntityExtractor.this.type

    Set the KV cache defragmentation threshold

    Definition Classes
    HasLlamaCppModelProperties
  230. def setDisableLog(disableLog: Boolean): LLMEntityExtractor.this.type

    Definition Classes
    HasLlamaCppModelProperties
  231. def setDisableTokenIds(disableTokenIds: Array[Int]): LLMEntityExtractor.this.type

    Set the token ids to disable in the completion. This corresponds to setTokenBias with a value of Float.NEGATIVE_INFINITY.

    Definition Classes
    HasLlamaCppInferenceProperties
  232. def setDynamicTemperatureExponent(dynatempExponent: Float): LLMEntityExtractor.this.type

    Set the dynamic temperature exponent

    Definition Classes
    HasLlamaCppInferenceProperties
  233. def setDynamicTemperatureRange(dynatempRange: Float): LLMEntityExtractor.this.type

    Set the dynamic temperature range

    Definition Classes
    HasLlamaCppInferenceProperties
  234. def setEntityTypes(value: Array[String]): LLMEntityExtractor.this.type

  235. def setExtraInputCols(value: Array[String]): LLMEntityExtractor.this.type
    Definition Classes
    HasInputAnnotationCols
  236. def setFewShotExamples(value: List[List[String]]): LLMEntityExtractor.this.type

    Java/Python-compatible setter for fewShotExamples.

    When called from PySpark via py4j, Python lists of tuples arrive as java.util.ArrayList[java.util.ArrayList[String]]. This overload converts them to the expected Scala Array[(String, String)].

  237. def setFewShotExamples(value: Array[(String, String)]): LLMEntityExtractor.this.type

  238. def setFlashAttention(flashAttention: Boolean): LLMEntityExtractor.this.type

    Whether to enable Flash Attention

    Definition Classes
    HasLlamaCppModelProperties
  239. def setFrequencyPenalty(frequencyPenalty: Float): LLMEntityExtractor.this.type

    Set the repetition alpha frequency penalty

    Definition Classes
    HasLlamaCppInferenceProperties
  240. def setGpuSplitMode(splitMode: String): LLMEntityExtractor.this.type

    Set how to split the model across GPUs

    • NONE: No GPU split
    • LAYER: Split the model across GPUs by layer
    • ROW: Split the model across GPUs by rows
    Definition Classes
    HasLlamaCppModelProperties
  241. def setGrammar(grammar: String): LLMEntityExtractor.this.type

    Set BNF-like grammar to constrain generations

    Definition Classes
    HasLlamaCppInferenceProperties
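    As a sketch, a llama.cpp GBNF grammar constraining generation to the JSON shape shown in the class docs might look like this. The grammar text is illustrative; it is not the annotator's built-in defaultGrammar.

```scala
// Illustrative GBNF grammar for the {"extractions": [...]} shape; not the
// built-in defaultGrammar. Triple-quoted, so \" and \t reach GBNF verbatim.
val jsonGrammar =
  """root       ::= "{" ws "\"extractions\":" ws "[" ws (extraction (ws "," ws extraction)*)? ws "]" ws "}"
    |extraction ::= "{" ws "\"entity\":" ws string ws "," ws "\"text\":" ws string ws "}"
    |string     ::= "\"" [^"]* "\""
    |ws         ::= [ \t\n]*""".stripMargin
```

    Passing it via .setGrammar(jsonGrammar) constrains the completion to this structure.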
  242. def setIgnoreEos(ignoreEos: Boolean): LLMEntityExtractor.this.type

    Set whether to ignore the end-of-stream token and continue generating (implies --logit-bias EOS-inf)

    Definition Classes
    HasLlamaCppInferenceProperties
  243. final def setInputCols(value: String*): LLMEntityExtractor.this.type
    Definition Classes
    HasInputAnnotationCols
  244. def setInputCols(value: Array[String]): LLMEntityExtractor.this.type

    Overrides the required annotator columns if different from the default

    Definition Classes
    HasInputAnnotationCols
  245. def setInputPrefix(inputPrefix: String): LLMEntityExtractor.this.type

    Set the prompt to start generation with

    Definition Classes
    HasLlamaCppInferenceProperties
  246. def setInputSuffix(inputSuffix: String): LLMEntityExtractor.this.type

    Set a suffix for infilling

    Definition Classes
    HasLlamaCppInferenceProperties
  247. def setLazyAnnotator(value: Boolean): LLMEntityExtractor.this.type
    Definition Classes
    CanBeLazy
  248. def setLogVerbosity(logVerbosity: Int): LLMEntityExtractor.this.type

    Set the verbosity threshold. Messages with a higher verbosity will be ignored.

    Values map to the following:

    • GGML_LOG_LEVEL_NONE = 0
    • GGML_LOG_LEVEL_DEBUG = 1
    • GGML_LOG_LEVEL_INFO = 2
    • GGML_LOG_LEVEL_WARN = 3
    • GGML_LOG_LEVEL_ERROR = 4
    • GGML_LOG_LEVEL_CONT = 5 (continue previous log)
    Definition Classes
    HasLlamaCppModelProperties
  249. def setMainGpu(mainGpu: Int): LLMEntityExtractor.this.type

    Set the GPU that is used for scratch and small tensors

    Definition Classes
    HasLlamaCppModelProperties
  250. def setMetadata(metadata: String): LLMEntityExtractor.this.type

    Set the metadata for the model

    Definition Classes
    HasLlamaCppModelProperties
  251. def setMinKeep(minKeep: Int): LLMEntityExtractor.this.type

    Set the minimum number of tokens the samplers should return (0 = disabled)

    Definition Classes
    HasLlamaCppInferenceProperties
  252. def setMinP(minP: Float): LLMEntityExtractor.this.type

    Set min-p sampling

    Definition Classes
    HasLlamaCppInferenceProperties
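    Min-p sampling keeps only tokens whose probability is at least a given fraction of the most likely token's probability. A minimal, self-contained sketch of the idea (illustrative only — not llama.cpp's implementation; all names are invented):

    ```scala
    // Illustrative sketch of min-p filtering over a token distribution.
    object MinPSketch {
      // Keep tokens whose probability is at least `p` times the top probability.
      def minP(probs: Map[Int, Double], p: Double): Map[Int, Double] = {
        val threshold = p * probs.values.max
        probs.filter { case (_, q) => q >= threshold }
      }
    }
    ```

    With setMinP(0.1f), a token would need at least 10% of the best token's probability to remain a candidate.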
  253. def setMiroStat(mirostat: String): LLMEntityExtractor.this.type

    Set MiroStat sampling strategies.

    • DISABLED: No MiroStat
    • V1: MiroStat V1
    • V2: MiroStat V2
    Definition Classes
    HasLlamaCppInferenceProperties
  254. def setMiroStatEta(mirostatEta: Float): LLMEntityExtractor.this.type

    Set the MiroStat learning rate, parameter eta

    Definition Classes
    HasLlamaCppInferenceProperties
  255. def setMiroStatTau(mirostatTau: Float): LLMEntityExtractor.this.type

    Set the MiroStat target entropy, parameter tau

    Definition Classes
    HasLlamaCppInferenceProperties
  256. def setModelDraft(modelDraft: String): LLMEntityExtractor.this.type

    Set the draft model for speculative decoding

    Definition Classes
    HasLlamaCppModelProperties
  257. def setModelIfNotSet(spark: SparkSession, wrapper: GGUFWrapper): LLMEntityExtractor.this.type

  258. def setNBatch(nBatch: Int): LLMEntityExtractor.this.type

    Set the logical batch size for prompt processing (must be >=32 to use BLAS)

    Definition Classes
    HasLlamaCppModelProperties
  259. def setNCtx(nCtx: Int): LLMEntityExtractor.this.type

    Set the size of the prompt context

    Definition Classes
    HasLlamaCppModelProperties
  260. def setNDraft(nDraft: Int): LLMEntityExtractor.this.type

    Set the number of tokens to draft for speculative decoding

    Definition Classes
    HasLlamaCppModelProperties
  261. def setNGpuLayers(nGpuLayers: Int): LLMEntityExtractor.this.type

    Set the number of layers to store in VRAM (-1 - use default)

    Definition Classes
    HasLlamaCppModelProperties
  262. def setNGpuLayersDraft(nGpuLayersDraft: Int): LLMEntityExtractor.this.type

    Set the number of layers to store in VRAM for the draft model (-1 - use default)

    Definition Classes
    HasLlamaCppModelProperties
  263. def setNKeep(nKeep: Int): LLMEntityExtractor.this.type

    Set the number of tokens to keep from the initial prompt

    Definition Classes
    HasLlamaCppInferenceProperties
  264. def setNPredict(nPredict: Int): LLMEntityExtractor.this.type

    Set the number of tokens to predict

    Definition Classes
    HasLlamaCppInferenceProperties
  265. def setNProbs(nProbs: Int): LLMEntityExtractor.this.type

    Set the number of top token probabilities to output if greater than 0.

    Definition Classes
    HasLlamaCppInferenceProperties
  266. def setNThreads(nThreads: Int): LLMEntityExtractor.this.type

    Set the number of threads to use during generation

    Definition Classes
    HasLlamaCppModelProperties
  267. def setNThreadsBatch(nThreadsBatch: Int): LLMEntityExtractor.this.type

    Set the number of threads to use during batch and prompt processing

    Definition Classes
    HasLlamaCppModelProperties
  268. def setNUbatch(nUbatch: Int): LLMEntityExtractor.this.type

    Set the physical batch size for prompt processing (must be >=32 to use BLAS)

    Definition Classes
    HasLlamaCppModelProperties
  269. def setNoKvOffload(noKvOffload: Boolean): LLMEntityExtractor.this.type

    Whether to disable KV offload

    Definition Classes
    HasLlamaCppModelProperties
  270. def setNumaStrategy(numa: String): LLMEntityExtractor.this.type

    Set optimization strategies that help on some NUMA systems (if available)

    Available Strategies:

    • DISABLED: No NUMA optimizations
    • DISTRIBUTE: spread execution evenly over all nodes
    • ISOLATE: only spawn threads on CPUs on the node that execution started on
    • NUMA_CTL: use the CPU map provided by numactl
    • MIRROR: Mirrors the model across NUMA nodes
    Definition Classes
    HasLlamaCppModelProperties
  271. final def setOutputCol(value: String): LLMEntityExtractor.this.type

    Overrides annotation column name when transforming

    Definition Classes
    HasOutputAnnotationCol
  272. def setParent(parent: Estimator[LLMEntityExtractor]): LLMEntityExtractor
    Definition Classes
    Model
  273. def setPenalizeNl(penalizeNl: Boolean): LLMEntityExtractor.this.type

    Set whether to penalize newline tokens

    Definition Classes
    HasLlamaCppInferenceProperties
  274. def setPenaltyPrompt(penaltyPrompt: String): LLMEntityExtractor.this.type

    Override which part of the prompt is penalized for repetition.

    Definition Classes
    HasLlamaCppInferenceProperties
  275. def setPresencePenalty(presencePenalty: Float): LLMEntityExtractor.this.type

    Set the repetition alpha presence penalty

    Definition Classes
    HasLlamaCppInferenceProperties
  276. def setPromptTemplate(value: String): LLMEntityExtractor.this.type

  277. def setReasoningBudget(reasoningBudget: Int): LLMEntityExtractor.this.type

    Controls the amount of thinking allowed; currently only one of: -1 for unrestricted thinking budget, or 0 to disable thinking (default: -1)

    Definition Classes
    HasLlamaCppModelProperties
  278. def setRepeatLastN(repeatLastN: Int): LLMEntityExtractor.this.type

    Set the last n tokens to consider for penalties

    Definition Classes
    HasLlamaCppInferenceProperties
  279. def setRepeatPenalty(repeatPenalty: Float): LLMEntityExtractor.this.type

    Set the penalty of repeated sequences of tokens

    Definition Classes
    HasLlamaCppInferenceProperties
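    The interaction of setRepeatPenalty and setRepeatLastN can be pictured with a minimal, self-contained sketch (illustrative only — this is not llama.cpp's actual sampler code, and all names are invented):

    ```scala
    // Illustrative sketch of a repetition penalty over logits.
    object RepeatPenaltySketch {
      // Divide positive logits (multiply negative ones) of recently seen
      // tokens by the penalty, making repetition less likely.
      def penalize(
          logits: Map[Int, Double],
          recent: Set[Int],
          penalty: Double): Map[Int, Double] =
        logits.map { case (t, l) =>
          if (recent.contains(t)) t -> (if (l > 0) l / penalty else l * penalty)
          else t -> l
        }
    }
    ```

    Here `recent` would hold the last n token ids, i.e. the window that setRepeatLastN controls.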
  280. def setRopeFreqBase(ropeFreqBase: Float): LLMEntityExtractor.this.type

    Set the RoPE base frequency, used by NTK-aware scaling

    Definition Classes
    HasLlamaCppModelProperties
  281. def setRopeFreqScale(ropeFreqScale: Float): LLMEntityExtractor.this.type

    Set the RoPE frequency scaling factor, expands context by a factor of 1/N

    Definition Classes
    HasLlamaCppModelProperties
  282. def setRopeScalingType(ropeScalingType: String): LLMEntityExtractor.this.type

    Set the RoPE frequency scaling method, defaults to linear unless specified by the model.

    • NONE: Don't use any scaling
    • LINEAR: Linear scaling
    • YARN: YaRN RoPE scaling
    Definition Classes
    HasLlamaCppModelProperties
  283. def setSamplers(samplers: Array[String]): LLMEntityExtractor.this.type

    Set which samplers to use for token generation in the given order.

    Available Samplers are:

    • TOP_K: Top-k sampling
    • TFS_Z: Tail free sampling
    • TYPICAL_P: Locally typical sampling p
    • TOP_P: Top-p sampling
    • MIN_P: Min-p sampling
    • TEMPERATURE: Temperature sampling
    Definition Classes
    HasLlamaCppInferenceProperties
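    The ordered sampler chain set by setSamplers can be pictured as successive filters over a token distribution. A simplified, self-contained sketch of two of the stages (illustrative only — not llama.cpp's implementation; all names are invented):

    ```scala
    // Illustrative sketch of an ordered sampler chain: each stage
    // narrows the candidate token set before the next stage runs.
    object SamplerChainSketch {
      // TOP_K: keep only the k most probable tokens.
      def topK(probs: Map[Int, Double], k: Int): Map[Int, Double] =
        probs.toSeq.sortBy(-_._2).take(k).toMap

      // TOP_P: keep the smallest set of tokens whose cumulative
      // probability reaches at least p.
      def topP(probs: Map[Int, Double], p: Double): Map[Int, Double] = {
        val sorted = probs.toSeq.sortBy(-_._2)
        val cum = sorted.scanLeft(0.0)(_ + _._2).tail
        val cut = cum.indexWhere(_ >= p)
        sorted.take(if (cut < 0) sorted.size else cut + 1).toMap
      }

      // Renormalize so the surviving probabilities sum to 1.
      def renorm(probs: Map[Int, Double]): Map[Int, Double] = {
        val z = probs.values.sum
        probs.map { case (t, q) => t -> q / z }
      }
    }
    ```

    With setSamplers(Array("TOP_K", "TOP_P")), TOP_K runs first and TOP_P filters its output, which is why the order matters.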
  284. def setSeed(seed: Int): LLMEntityExtractor.this.type

    Set the RNG seed

    Definition Classes
    HasLlamaCppInferenceProperties
  285. def setStopStrings(stopStrings: Array[String]): LLMEntityExtractor.this.type

    Set strings that stop token generation when encountered

    Definition Classes
    HasLlamaCppInferenceProperties
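    The effect of stop strings can be shown with a small, self-contained sketch (illustrative only — not the annotator's internal logic; names are invented):

    ```scala
    // Illustrative sketch: cut generated text at the first stop string.
    object StopStringSketch {
      def truncate(text: String, stops: Seq[String]): String = {
        // Positions of each stop string that actually occurs.
        val cuts = stops.map(s => text.indexOf(s)).filter(_ >= 0)
        if (cuts.isEmpty) text else text.take(cuts.min)
      }
    }
    ```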
  286. def setSystemPrompt(systemPrompt: String): LLMEntityExtractor.this.type

    Set a system prompt to use

    Definition Classes
    HasLlamaCppModelProperties
  287. def setTemperature(temperature: Float): LLMEntityExtractor.this.type

    Set the temperature

    Definition Classes
    HasLlamaCppInferenceProperties
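    Temperature divides the logits before the softmax, so low values (such as setTemperature(0.1f) in the example above) sharpen the distribution toward the most likely token, which suits structured extraction. A minimal, self-contained sketch of the math (illustrative only; names are invented):

    ```scala
    // Illustrative sketch: temperature-scaled softmax over logits.
    object TemperatureSketch {
      def softmax(logits: Seq[Double], temperature: Double): Seq[Double] = {
        val scaled = logits.map(_ / temperature)
        val m = scaled.max // subtract the max for numerical stability
        val exps = scaled.map(x => math.exp(x - m))
        val z = exps.sum
        exps.map(_ / z)
      }
    }
    ```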
  288. def setTfsZ(tfsZ: Float): LLMEntityExtractor.this.type

    Set tail free sampling, parameter z

    Definition Classes
    HasLlamaCppInferenceProperties
  289. def setTokenBias(tokenBias: HashMap[String, Double]): LLMEntityExtractor.this.type

    Set the tokens to disable during completion. (Override for PySpark)

    Definition Classes
    HasLlamaCppInferenceProperties
  290. def setTokenBias(tokenBias: Map[String, Float]): LLMEntityExtractor.this.type

    Set the tokens to disable during completion.

    Definition Classes
    HasLlamaCppInferenceProperties
  291. def setTokenIdBias(tokenIdBias: HashMap[Integer, Double]): LLMEntityExtractor.this.type

    Set the token ids to disable in the completion. (Override for PySpark)

    Definition Classes
    HasLlamaCppInferenceProperties
  292. def setTokenIdBias(tokenIdBias: Map[Int, Float]): LLMEntityExtractor.this.type

    Set the token ids to disable in the completion.

    Definition Classes
    HasLlamaCppInferenceProperties
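    A logit bias adds a per-token offset before sampling; a bias of negative infinity effectively disables a token. A minimal, self-contained sketch of the idea behind setTokenIdBias (illustrative only — not llama.cpp's implementation; names are invented):

    ```scala
    // Illustrative sketch: apply a per-token-id bias map to logits.
    object TokenBiasSketch {
      // Double.NegativeInfinity removes a token from consideration entirely.
      def applyBias(
          logits: Map[Int, Double],
          bias: Map[Int, Double]): Map[Int, Double] =
        logits.map { case (t, l) => t -> (l + bias.getOrElse(t, 0.0)) }
    }
    ```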
  293. def setTopK(topK: Int): LLMEntityExtractor.this.type

    Set top-k sampling

    Definition Classes
    HasLlamaCppInferenceProperties
  294. def setTopP(topP: Float): LLMEntityExtractor.this.type

    Set top-p sampling

    Definition Classes
    HasLlamaCppInferenceProperties
  295. def setTypicalP(typicalP: Float): LLMEntityExtractor.this.type

    Set locally typical sampling, parameter p

    Definition Classes
    HasLlamaCppInferenceProperties
  296. def setUseChatTemplate(useChatTemplate: Boolean): LLMEntityExtractor.this.type

    Set whether generation should apply a chat template

    Definition Classes
    HasLlamaCppInferenceProperties
  297. def setUseMlock(useMlock: Boolean): LLMEntityExtractor.this.type

    Whether to force the system to keep model in RAM rather than swapping or compressing

    Definition Classes
    HasLlamaCppModelProperties
  298. def setUseMmap(useMmap: Boolean): LLMEntityExtractor.this.type

    Whether to memory-map the model (faster load but may increase pageouts if not using mlock)

    Definition Classes
    HasLlamaCppModelProperties
  299. def setYarnAttnFactor(yarnAttnFactor: Float): LLMEntityExtractor.this.type

    Set the YaRN scale sqrt(t) or attention magnitude

    Definition Classes
    HasLlamaCppModelProperties
  300. def setYarnBetaFast(yarnBetaFast: Float): LLMEntityExtractor.this.type

    Set the YaRN low correction dim or beta

    Definition Classes
    HasLlamaCppModelProperties
  301. def setYarnBetaSlow(yarnBetaSlow: Float): LLMEntityExtractor.this.type

    Set the YaRN high correction dim or alpha

    Definition Classes
    HasLlamaCppModelProperties
  302. def setYarnExtFactor(yarnExtFactor: Float): LLMEntityExtractor.this.type

    Set the YaRN extrapolation mix factor

    Definition Classes
    HasLlamaCppModelProperties
  303. def setYarnOrigCtx(yarnOrigCtx: Int): LLMEntityExtractor.this.type

    Set the YaRN original context size of model

    Definition Classes
    HasLlamaCppModelProperties
  304. val stopStrings: StringArrayParam

  305. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  306. val systemPrompt: Param[String]

    Definition Classes
    HasLlamaCppModelProperties
  307. val temperature: FloatParam

  308. val tfsZ: FloatParam

  309. def toString(): String
    Definition Classes
    Identifiable → AnyRef → Any
  310. val tokenBias: StructFeature[Map[String, Float]]

  311. val tokenIdBias: StructFeature[Map[Int, Float]]
  312. val topK: IntParam

  313. val topP: FloatParam

  314. final def transform(dataset: Dataset[_]): DataFrame

    Given requirements are met, this applies the ML transformation within a Pipeline or standalone. The output annotation is generated as a new column; previous annotations remain available separately. Metadata is built at schema level to record the annotations' structural information outside their content.

    dataset

    Dataset[Row]

    Definition Classes
    AnnotatorModel → Transformer
  315. def transform(dataset: Dataset[_], paramMap: ParamMap): DataFrame
    Definition Classes
    Transformer
    Annotations
    @Since( "2.0.0" )
  316. def transform(dataset: Dataset[_], firstParamPair: ParamPair[_], otherParamPairs: ParamPair[_]*): DataFrame
    Definition Classes
    Transformer
    Annotations
    @Since( "2.0.0" ) @varargs()
  317. final def transformSchema(schema: StructType): StructType

    Requirement for pipeline transformation validation. It is called on fit().

    Definition Classes
    RawAnnotator → PipelineStage
  318. def transformSchema(schema: StructType, logging: Boolean): StructType
    Attributes
    protected
    Definition Classes
    PipelineStage
    Annotations
    @DeveloperApi()
  319. val typicalP: FloatParam

  320. val uid: String
    Definition Classes
    LLMEntityExtractor → Identifiable
  321. val useChatTemplate: BooleanParam

  322. val useMlock: BooleanParam

    Definition Classes
    HasLlamaCppModelProperties
  323. val useMmap: BooleanParam

    Definition Classes
    HasLlamaCppModelProperties
  324. def validate(schema: StructType): Boolean

    Takes a Dataset and checks to see if all the required annotation types are present.

    schema

    to be validated

    returns

    True if all the required types are present, else false

    Attributes
    protected
    Definition Classes
    RawAnnotator
  325. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  326. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  327. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  328. def wrapColumnMetadata(col: Column): Column
    Attributes
    protected
    Definition Classes
    RawAnnotator
  329. def write: MLWriter
    Definition Classes
    ParamsAndFeaturesWritable → DefaultParamsWritable → MLWritable
  330. val yarnAttnFactor: FloatParam

    Definition Classes
    HasLlamaCppModelProperties
  331. val yarnBetaFast: FloatParam

    Definition Classes
    HasLlamaCppModelProperties
  332. val yarnBetaSlow: FloatParam

    Definition Classes
    HasLlamaCppModelProperties
  333. val yarnExtFactor: FloatParam

    Definition Classes
    HasLlamaCppModelProperties
  334. val yarnOrigCtx: IntParam

    Definition Classes
    HasLlamaCppModelProperties

Inherited from HasEngine

Inherited from HasProtectedParams

Inherited from CanBeLazy

Inherited from HasOutputAnnotationCol

Inherited from HasInputAnnotationCols

Inherited from HasOutputAnnotatorType

Inherited from ParamsAndFeaturesWritable

Inherited from HasFeatures

Inherited from DefaultParamsWritable

Inherited from MLWritable

Inherited from Model[LLMEntityExtractor]

Inherited from Transformer

Inherited from PipelineStage

Inherited from Logging

Inherited from Params

Inherited from Serializable

Inherited from Serializable

Inherited from Identifiable

Inherited from AnyRef

Inherited from Any

Parameters

A list of (hyper-)parameter keys this annotator can take. Users can set and get the parameter values through setters and getters, respectively.

Annotator types

Required input and expected output annotator types

Members

Parameter setters

Parameter getters