
class AutoGGUFModel extends AnnotatorModel[AutoGGUFModel] with HasBatchedAnnotate[AutoGGUFModel] with HasEngine with HasLlamaCppModelProperties with HasLlamaCppInferenceProperties with HasProtectedParams

Annotator that uses the llama.cpp library to generate text completions with large language models.

For settable parameters and their explanations, see HasLlamaCppInferenceProperties and HasLlamaCppModelProperties, and refer to the llama.cpp documentation of server.cpp for more information.

If the parameters are not set, the annotator defaults to the values provided by the model.

Pretrained models can be loaded with the pretrained method of the companion object:

val autoGGUFModel = AutoGGUFModel.pretrained()
  .setInputCols("document")
  .setOutputCol("completions")

The default model is "phi3.5_mini_4k_instruct_q4_gguf", if no name is provided.

For available pretrained models please see the Models Hub.

For extended examples of usage, see the AutoGGUFModelTest and the example notebook.

Note

To use GPU inference with this annotator, make sure to use the Spark NLP GPU package and set the number of GPU layers with the setNGpuLayers method.

When using larger models, we recommend adjusting the context size and GPU usage with setNCtx and setNGpuLayers according to your hardware to avoid out-of-memory errors.
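
For example, a configuration along these lines (the values are illustrative and depend on your hardware) keeps the prompt context and the number of offloaded layers within your GPU memory:

val gpuModel = AutoGGUFModel
  .pretrained()
  .setInputCols("document")
  .setOutputCol("completions")
  .setNCtx(4096)     // size of the prompt context
  .setNGpuLayers(99) // number of layers to offload to the GPU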

Example

import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

val document = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

val autoGGUFModel = AutoGGUFModel
  .pretrained()
  .setInputCols("document")
  .setOutputCol("completions")
  .setBatchSize(4)
  .setNPredict(20)
  .setNGpuLayers(99)
  .setTemperature(0.4f)
  .setTopK(40)
  .setTopP(0.9f)
  .setPenalizeNl(true)

val pipeline = new Pipeline().setStages(Array(document, autoGGUFModel))

val data = Seq("Hello, I am a").toDF("text")
val result = pipeline.fit(data).transform(data)
result.select("completions").show(truncate = false)
+-----------------------------------------------------------------------------------------------------------------------------------+
|completions                                                                                                                        |
+-----------------------------------------------------------------------------------------------------------------------------------+
|[{document, 0, 78,  new user.  I am currently working on a project and I need to create a list of , {prompt -> Hello, I am a}, []}]|
+-----------------------------------------------------------------------------------------------------------------------------------+

Instance Constructors

  1. new AutoGGUFModel()

    Annotator reference id. Used to identify elements in metadata or to refer to this annotator type

  2. new AutoGGUFModel(uid: String)

    uid

    required uid for storing annotator to disk

Type Members

  1. implicit class ProtectedParam[T] extends Param[T]
    Definition Classes
    HasProtectedParams
  2. type AnnotationContent = Seq[Row]

    Internal types to show Rows as a relevant StructType. Should be deleted once Spark releases UserDefinedTypes to @developerAPI.

    Attributes
    protected
    Definition Classes
    AnnotatorModel
  3. type AnnotatorType = String
    Definition Classes
    HasOutputAnnotatorType

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def $[T](param: Param[T]): T
    Attributes
    protected
    Definition Classes
    Params
  4. def $$[T](feature: StructFeature[T]): T
    Attributes
    protected
    Definition Classes
    HasFeatures
  5. def $$[K, V](feature: MapFeature[K, V]): Map[K, V]
    Attributes
    protected
    Definition Classes
    HasFeatures
  6. def $$[T](feature: SetFeature[T]): Set[T]
    Attributes
    protected
    Definition Classes
    HasFeatures
  7. def $$[T](feature: ArrayFeature[T]): Array[T]
    Attributes
    protected
    Definition Classes
    HasFeatures
  8. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  9. def _transform(dataset: Dataset[_], recursivePipeline: Option[PipelineModel]): DataFrame
    Attributes
    protected
    Definition Classes
    AnnotatorModel
  10. def afterAnnotate(dataset: DataFrame): DataFrame
    Attributes
    protected
    Definition Classes
    AnnotatorModel
  11. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  12. def batchAnnotate(batchedAnnotations: Seq[Array[Annotation]]): Seq[Seq[Annotation]]

    Completes the batch of annotations.

    batchedAnnotations

    Annotations (single element arrays) in batches

    returns

    Completed text sequences

    Definition Classes
    AutoGGUFModelHasBatchedAnnotate
  13. def batchProcess(rows: Iterator[_]): Iterator[Row]
    Definition Classes
    HasBatchedAnnotate
  14. val batchSize: IntParam

    Size of every batch (Default depends on model).

    Definition Classes
    HasBatchedAnnotate
  15. def beforeAnnotate(dataset: Dataset[_]): Dataset[_]
    Attributes
    protected
    Definition Classes
    AnnotatorModel
  16. val cachePrompt: BooleanParam

  17. val chatTemplate: Param[String]

    Definition Classes
    HasLlamaCppModelProperties
  18. final def checkSchema(schema: StructType, inputAnnotatorType: String): Boolean
    Attributes
    protected
    Definition Classes
    HasInputAnnotationCols
  19. final def clear(param: Param[_]): AutoGGUFModel.this.type
    Definition Classes
    Params
  20. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  21. def copy(extra: ParamMap): AutoGGUFModel

    Requirement for annotator copies

    Definition Classes
    RawAnnotator → Model → Transformer → PipelineStage → Params
  22. def copyValues[T <: Params](to: T, extra: ParamMap): T
    Attributes
    protected
    Definition Classes
    Params
  23. final def defaultCopy[T <: Params](extra: ParamMap): T
    Attributes
    protected
    Definition Classes
    Params
  24. val defaultGpuLayers: Int
    Attributes
    protected
    Definition Classes
    HasLlamaCppModelProperties
  25. val defaultMainGpu: Int
    Attributes
    protected
    Definition Classes
    HasLlamaCppModelProperties
  26. val defragmentationThreshold: FloatParam

    Definition Classes
    HasLlamaCppModelProperties
  27. val disableTokenIds: IntArrayParam

  28. val dynamicTemperatureExponent: FloatParam

  29. val dynamicTemperatureRange: FloatParam

  30. val embedding: BooleanParam

    Definition Classes
    HasLlamaCppModelProperties
  31. val engine: Param[String]

    This param is set internally once via loadSavedModel. That's why there is no setter.

    Definition Classes
    HasEngine
  32. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  33. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  34. def explainParam(param: Param[_]): String
    Definition Classes
    Params
  35. def explainParams(): String
    Definition Classes
    Params
  36. def extraValidate(structType: StructType): Boolean
    Attributes
    protected
    Definition Classes
    RawAnnotator
  37. def extraValidateMsg: String

    Override for additional custom schema checks

    Attributes
    protected
    Definition Classes
    RawAnnotator
  38. final def extractParamMap(): ParamMap
    Definition Classes
    Params
  39. final def extractParamMap(extra: ParamMap): ParamMap
    Definition Classes
    Params
  40. val features: ArrayBuffer[Feature[_, _, _]]
    Definition Classes
    HasFeatures
  41. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  42. val flashAttention: BooleanParam

    Definition Classes
    HasLlamaCppModelProperties
  43. val frequencyPenalty: FloatParam

  44. def get[T](feature: StructFeature[T]): Option[T]
    Attributes
    protected
    Definition Classes
    HasFeatures
  45. def get[K, V](feature: MapFeature[K, V]): Option[Map[K, V]]
    Attributes
    protected
    Definition Classes
    HasFeatures
  46. def get[T](feature: SetFeature[T]): Option[Set[T]]
    Attributes
    protected
    Definition Classes
    HasFeatures
  47. def get[T](feature: ArrayFeature[T]): Option[Array[T]]
    Attributes
    protected
    Definition Classes
    HasFeatures
  48. final def get[T](param: Param[T]): Option[T]
    Definition Classes
    Params
  49. def getBatchSize: Int

    Size of every batch.

    Definition Classes
    HasBatchedAnnotate
  50. def getCachePrompt: Boolean

  51. def getChatTemplate: String

    Definition Classes
    HasLlamaCppModelProperties
  52. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  53. final def getDefault[T](param: Param[T]): Option[T]
    Definition Classes
    Params
  54. def getDefragmentationThreshold: Float

    Definition Classes
    HasLlamaCppModelProperties
  55. def getDisableTokenIds: Array[Int]

  56. def getDynamicTemperatureExponent: Float

  57. def getDynamicTemperatureRange: Float

  58. def getEmbedding: Boolean

    Definition Classes
    HasLlamaCppModelProperties
  59. def getEngine: String

    Definition Classes
    HasEngine
  60. def getFlashAttention: Boolean

    Definition Classes
    HasLlamaCppModelProperties
  61. def getFrequencyPenalty: Float

  62. def getGrammar: String

  63. def getGrpAttnN: Int
    Definition Classes
    HasLlamaCppModelProperties
  64. def getGrpAttnW: Int

    Definition Classes
    HasLlamaCppModelProperties
  65. def getIgnoreEos: Boolean

  66. def getInferenceParameters: InferenceParameters
    Attributes
    protected
    Definition Classes
    HasLlamaCppInferenceProperties
  67. def getInputCols: Array[String]

    returns

    input annotations columns currently used

    Definition Classes
    HasInputAnnotationCols
  68. def getInputPrefix: String

  69. def getInputPrefixBos: Boolean

    Definition Classes
    HasLlamaCppModelProperties
  70. def getInputSuffix: String

  71. def getLazyAnnotator: Boolean
    Definition Classes
    CanBeLazy
  72. def getLookupCacheDynamicFilePath: String

    Definition Classes
    HasLlamaCppModelProperties
  73. def getLookupCacheStaticFilePath: String

    Definition Classes
    HasLlamaCppModelProperties
  74. def getLoraAdapters: Map[String, Float]

    Definition Classes
    HasLlamaCppModelProperties
  75. def getMainGpu: Int

    Definition Classes
    HasLlamaCppModelProperties
  76. def getMetadata: String

    Get the metadata for the model

    Definition Classes
    HasLlamaCppModelProperties
  77. def getMetadataMap: Map[String, String]
    Definition Classes
    HasLlamaCppModelProperties
  78. def getMinKeep: Int

  79. def getMinP: Float

  80. def getMiroStat: String

  81. def getMiroStatEta: Float

  82. def getMiroStatTau: Float

  83. def getModelDraft: String

    Definition Classes
    HasLlamaCppModelProperties
  84. def getModelIfNotSet: GGUFWrapper

  85. def getModelParameters: ModelParameters
    Attributes
    protected
    Definition Classes
    HasLlamaCppModelProperties
  86. def getNBatch: Int

    Definition Classes
    HasLlamaCppModelProperties
  87. def getNChunks: Int

    Definition Classes
    HasLlamaCppModelProperties
  88. def getNCtx: Int

    Definition Classes
    HasLlamaCppModelProperties
  89. def getNDraft: Int

    Definition Classes
    HasLlamaCppModelProperties
  90. def getNGpuLayers: Int

    Definition Classes
    HasLlamaCppModelProperties
  91. def getNGpuLayersDraft: Int

    Definition Classes
    HasLlamaCppModelProperties
  92. def getNKeep: Int

  93. def getNPredict: Int
  94. def getNProbs: Int

  95. def getNSequences: Int

    Definition Classes
    HasLlamaCppModelProperties
  96. def getNThreads: Int

    Definition Classes
    HasLlamaCppModelProperties
  97. def getNThreadsBatch: Int

    Definition Classes
    HasLlamaCppModelProperties
  98. def getNThreadsBatchDraft: Int

    Definition Classes
    HasLlamaCppModelProperties
  99. def getNThreadsDraft: Int

    Definition Classes
    HasLlamaCppModelProperties
  100. def getNUbatch: Int

    Definition Classes
    HasLlamaCppModelProperties
  101. def getNoKvOffload: Boolean

    Definition Classes
    HasLlamaCppModelProperties
  102. def getNuma: String

    Definition Classes
    HasLlamaCppModelProperties
  103. final def getOrDefault[T](param: Param[T]): T
    Definition Classes
    Params
  104. final def getOutputCol: String

    Gets the annotation column name that is going to be generated

    Definition Classes
    HasOutputAnnotationCol
  105. def getPSplit: Float

    Definition Classes
    HasLlamaCppModelProperties
  106. def getParam(paramName: String): Param[Any]
    Definition Classes
    Params
  107. def getPenalizeNl: Boolean

  108. def getPenaltyPrompt: String

  109. def getPoolingType: String

    Definition Classes
    HasLlamaCppModelProperties
  110. def getPresencePenalty: Float

  111. def getRepeatLastN: Int

  112. def getRepeatPenalty: Float

  113. def getRopeFreqBase: Float

    Definition Classes
    HasLlamaCppModelProperties
  114. def getRopeFreqScale: Float

    Definition Classes
    HasLlamaCppModelProperties
  115. def getRopeScalingType: String

    Definition Classes
    HasLlamaCppModelProperties
  116. def getSamplers: Array[String]

  117. def getSeed: Int

  118. def getSplitMode: String

    Definition Classes
    HasLlamaCppModelProperties
  119. def getStopStrings: Array[String]

  120. def getSystemPrompt: String

    Definition Classes
    HasLlamaCppModelProperties
  121. def getTemperature: Float

  122. def getTensorSplit: Array[Double]

    Definition Classes
    HasLlamaCppModelProperties
  123. def getTfsZ: Float

  124. def getTokenBias: Map[String, Float]

  125. def getTokenIdBias: Map[Int, Float]

  126. def getTopK: Int

  127. def getTopP: Float

  128. def getTypicalP: Float

  129. def getUseChatTemplate: Boolean

  130. def getUseMlock: Boolean

    Definition Classes
    HasLlamaCppModelProperties
  131. def getUseMmap: Boolean

    Definition Classes
    HasLlamaCppModelProperties
  132. def getYarnAttnFactor: Float

    Definition Classes
    HasLlamaCppModelProperties
  133. def getYarnBetaFast: Float

    Definition Classes
    HasLlamaCppModelProperties
  134. def getYarnBetaSlow: Float

    Definition Classes
    HasLlamaCppModelProperties
  135. def getYarnExtFactor: Float

    Definition Classes
    HasLlamaCppModelProperties
  136. def getYarnOrigCtx: Int

    Definition Classes
    HasLlamaCppModelProperties
  137. val gpuSplitMode: Param[String]

    Set how to split the model across GPUs

    • NONE: No GPU split
    • LAYER: Split the model across GPUs by layer
    • ROW: Split the model across GPUs by rows
    Definition Classes
    HasLlamaCppModelProperties
  138. val grammar: Param[String]

  139. val grpAttnN: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  140. val grpAttnW: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  141. final def hasDefault[T](param: Param[T]): Boolean
    Definition Classes
    Params
  142. def hasParam(paramName: String): Boolean
    Definition Classes
    Params
  143. def hasParent: Boolean
    Definition Classes
    Model
  144. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  145. val ignoreEos: BooleanParam

  146. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  147. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  148. val inputAnnotatorTypes: Array[AnnotatorType]

    Annotator reference id. Used to identify elements in metadata or to refer to this annotator type

    Definition Classes
    AutoGGUFModelHasInputAnnotationCols
  149. final val inputCols: StringArrayParam

    Columns that contain annotations necessary to run this annotator. AnnotatorType is used both as input and output columns if not specified.

    Attributes
    protected
    Definition Classes
    HasInputAnnotationCols
  150. val inputPrefix: Param[String]

  151. val inputPrefixBos: BooleanParam

    Definition Classes
    HasLlamaCppModelProperties
  152. val inputSuffix: Param[String]

  153. final def isDefined(param: Param[_]): Boolean
    Definition Classes
    Params
  154. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  155. final def isSet(param: Param[_]): Boolean
    Definition Classes
    Params
  156. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  157. val lazyAnnotator: BooleanParam
    Definition Classes
    CanBeLazy
  158. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  159. def logDebug(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  160. def logDebug(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  161. def logError(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  162. def logError(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  163. def logInfo(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  164. def logInfo(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  165. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  166. def logTrace(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  167. def logTrace(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  168. def logWarning(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  169. def logWarning(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  170. val logger: Logger
    Attributes
    protected
    Definition Classes
    HasLlamaCppModelProperties
  171. val lookupCacheDynamicFilePath: Param[String]

    Definition Classes
    HasLlamaCppModelProperties
  172. val lookupCacheStaticFilePath: Param[String]

    Definition Classes
    HasLlamaCppModelProperties
  173. val loraAdapters: StructFeature[Map[String, Float]]

    Definition Classes
    HasLlamaCppModelProperties
  174. val mainGpu: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  175. val metadata: ProtectedParam[String]
    Definition Classes
    HasLlamaCppModelProperties
  176. val minKeep: IntParam

  177. val minP: FloatParam

  178. val miroStat: Param[String]

  179. val miroStatEta: FloatParam

  180. val miroStatTau: FloatParam

  181. val modelDraft: Param[String]

    Definition Classes
    HasLlamaCppModelProperties
  182. def msgHelper(schema: StructType): String
    Attributes
    protected
    Definition Classes
    HasInputAnnotationCols
  183. val nBatch: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  184. val nChunks: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  185. val nCtx: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  186. val nDraft: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  187. val nGpuLayers: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  188. val nGpuLayersDraft: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  189. val nKeep: IntParam

  190. val nPredict: IntParam

  191. val nProbs: IntParam

  192. val nSequences: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  193. val nThreads: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  194. val nThreadsBatch: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  195. val nThreadsBatchDraft: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  196. val nThreadsDraft: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  197. val nUbatch: IntParam

    Definition Classes
    HasLlamaCppModelProperties
  198. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  199. val noKvOffload: BooleanParam

    Definition Classes
    HasLlamaCppModelProperties
  200. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  201. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  202. val numaStrategy: Param[String]

    Set optimization strategies that help on some NUMA systems (if available)

    Available Strategies:

    • DISABLED: No NUMA optimizations
    • DISTRIBUTE: Spread execution evenly over all nodes
    • ISOLATE: Only spawn threads on CPUs on the node that execution started on
    • NUMA_CTL: Use the CPU map provided by numactl
    • MIRROR: Mirrors the model across NUMA nodes
    Definition Classes
    HasLlamaCppModelProperties
  203. def onWrite(path: String, spark: SparkSession): Unit
  204. val optionalInputAnnotatorTypes: Array[String]
    Definition Classes
    HasInputAnnotationCols
  205. val outputAnnotatorType: AnnotatorType
    Definition Classes
    AutoGGUFModelHasOutputAnnotatorType
  206. final val outputCol: Param[String]
    Attributes
    protected
    Definition Classes
    HasOutputAnnotationCol
  207. val pSplit: FloatParam

    Definition Classes
    HasLlamaCppModelProperties
  208. lazy val params: Array[Param[_]]
    Definition Classes
    Params
  209. var parent: Estimator[AutoGGUFModel]
    Definition Classes
    Model
  210. val penalizeNl: BooleanParam

  211. val penaltyPrompt: Param[String]

  212. val poolingType: Param[String]

    Set the pooling type for embeddings, use model default if unspecified

    • 0 NONE: Don't use any pooling
    • 1 MEAN: Mean Pooling
    • 2 CLS: Choose the CLS token
    • 3 LAST: Choose the last token
    Definition Classes
    HasLlamaCppModelProperties
  213. val presencePenalty: FloatParam

  214. val repeatLastN: IntParam

  215. val repeatPenalty: FloatParam

  216. val ropeFreqBase: FloatParam

    Definition Classes
    HasLlamaCppModelProperties
  217. val ropeFreqScale: FloatParam

    Definition Classes
    HasLlamaCppModelProperties
  218. val ropeScalingType: Param[String]

    Set the RoPE frequency scaling method, defaults to linear unless specified by the model.

    • UNSPECIFIED: Don't use any scaling
    • LINEAR: Linear scaling
    • YARN: YaRN RoPE scaling
    Definition Classes
    HasLlamaCppModelProperties
  219. val samplers: StringArrayParam

  220. def save(path: String): Unit
    Definition Classes
    MLWritable
    Annotations
    @Since( "1.6.0" ) @throws( ... )
  221. val seed: IntParam

  222. def set[T](param: ProtectedParam[T], value: T): AutoGGUFModel.this.type

    Sets the value for a protected Param.

    If the parameter was already set, it will not be set again. Default values do not count as a set value and can be overridden.

    T

    Type of the parameter

    param

    Protected parameter to set

    value

    Value for the parameter

    returns

    This object

    Definition Classes
    HasProtectedParams
  223. def set[T](feature: StructFeature[T], value: T): AutoGGUFModel.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  224. def set[K, V](feature: MapFeature[K, V], value: Map[K, V]): AutoGGUFModel.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  225. def set[T](feature: SetFeature[T], value: Set[T]): AutoGGUFModel.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  226. def set[T](feature: ArrayFeature[T], value: Array[T]): AutoGGUFModel.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  227. final def set(paramPair: ParamPair[_]): AutoGGUFModel.this.type
    Attributes
    protected
    Definition Classes
    Params
  228. final def set(param: String, value: Any): AutoGGUFModel.this.type
    Attributes
    protected
    Definition Classes
    Params
  229. final def set[T](param: Param[T], value: T): AutoGGUFModel.this.type
    Definition Classes
    Params
  230. def setBatchSize(size: Int): AutoGGUFModel.this.type

    Size of every batch.

    Definition Classes
    HasBatchedAnnotate
  231. def setCachePrompt(cachePrompt: Boolean): AutoGGUFModel.this.type

    Whether to remember the prompt to avoid reprocessing it

    Definition Classes
    HasLlamaCppInferenceProperties
  232. def setChatTemplate(chatTemplate: String): AutoGGUFModel.this.type

    The chat template to use

    Definition Classes
    HasLlamaCppModelProperties
  233. def setDefault[T](feature: StructFeature[T], value: () ⇒ T): AutoGGUFModel.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  234. def setDefault[K, V](feature: MapFeature[K, V], value: () ⇒ Map[K, V]): AutoGGUFModel.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  235. def setDefault[T](feature: SetFeature[T], value: () ⇒ Set[T]): AutoGGUFModel.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  236. def setDefault[T](feature: ArrayFeature[T], value: () ⇒ Array[T]): AutoGGUFModel.this.type
    Attributes
    protected
    Definition Classes
    HasFeatures
  237. final def setDefault(paramPairs: ParamPair[_]*): AutoGGUFModel.this.type
    Attributes
    protected
    Definition Classes
    Params
  238. final def setDefault[T](param: Param[T], value: T): AutoGGUFModel.this.type
    Attributes
    protected[org.apache.spark.ml]
    Definition Classes
    Params
  239. def setDefragmentationThreshold(defragThold: Float): AutoGGUFModel.this.type

    Set the KV cache defragmentation threshold

    Definition Classes
    HasLlamaCppModelProperties
  240. def setDisableTokenIds(disableTokenIds: Array[Int]): AutoGGUFModel.this.type

    Set the token ids to disable in the completion. This corresponds to setTokenBias with a value of Float.NEGATIVE_INFINITY.

    Definition Classes
    HasLlamaCppInferenceProperties
  241. def setDynamicTemperatureExponent(dynatempExponent: Float): AutoGGUFModel.this.type

    Set the dynamic temperature exponent

    Definition Classes
    HasLlamaCppInferenceProperties
  242. def setDynamicTemperatureRange(dynatempRange: Float): AutoGGUFModel.this.type

    Set the dynamic temperature range

    Definition Classes
    HasLlamaCppInferenceProperties
  243. def setEmbedding(embedding: Boolean): AutoGGUFModel.this.type

    Whether to load model with embedding support

    Definition Classes
    HasLlamaCppModelProperties
  244. def setFlashAttention(flashAttention: Boolean): AutoGGUFModel.this.type

    Whether to enable Flash Attention

    Definition Classes
    HasLlamaCppModelProperties
  245. def setFrequencyPenalty(frequencyPenalty: Float): AutoGGUFModel.this.type

    Set the repetition alpha frequency penalty

    Definition Classes
    HasLlamaCppInferenceProperties
  246. def setGpuSplitMode(splitMode: String): AutoGGUFModel.this.type

    Set how to split the model across GPUs

    • NONE: No GPU split
    • LAYER: Split the model across GPUs by layer
    • ROW: Split the model across GPUs by rows
    Definition Classes
    HasLlamaCppModelProperties
  247. def setGpuSupportIfAvailable(spark: SparkSession): AutoGGUFModel.this.type
    Attributes
    protected
    Definition Classes
    HasLlamaCppModelProperties
  248. def setGrammar(grammar: String): AutoGGUFModel.this.type

    Set BNF-like grammar to constrain generations

    Definition Classes
    HasLlamaCppInferenceProperties
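
    For example, using the autoGGUFModel instance from the example above (the grammar string is illustrative; any GBNF grammar accepted by llama.cpp works):

    // Constrain every completion to be either "yes" or "no"
    autoGGUFModel.setGrammar("root ::= \"yes\" | \"no\"")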
  249. def setGrpAttnN(grpAttnN: Int): AutoGGUFModel.this.type

    Set the group-attention factor

    Definition Classes
    HasLlamaCppModelProperties
  250. def setGrpAttnW(grpAttnW: Int): AutoGGUFModel.this.type

    Set the group-attention width

    Definition Classes
    HasLlamaCppModelProperties
  251. def setIgnoreEos(ignoreEos: Boolean): AutoGGUFModel.this.type

    Set whether to ignore end of stream token and continue generating (implies --logit-bias 2-inf)

    Definition Classes
    HasLlamaCppInferenceProperties
  252. final def setInputCols(value: String*): AutoGGUFModel.this.type
    Definition Classes
    HasInputAnnotationCols
  253. def setInputCols(value: Array[String]): AutoGGUFModel.this.type

    Overrides required annotators column if different than default

    Definition Classes
    HasInputAnnotationCols
  254. def setInputPrefix(inputPrefix: String): AutoGGUFModel.this.type

    Set the prompt to start generation with

    Definition Classes
    HasLlamaCppInferenceProperties
  255. def setInputPrefixBos(inputPrefixBos: Boolean): AutoGGUFModel.this.type

    Whether to add prefix BOS to user inputs, preceding the --in-prefix string

    Definition Classes
    HasLlamaCppModelProperties
  256. def setInputSuffix(inputSuffix: String): AutoGGUFModel.this.type

    Set a suffix for infilling

    Definition Classes
    HasLlamaCppInferenceProperties
  257. def setLazyAnnotator(value: Boolean): AutoGGUFModel.this.type
    Definition Classes
    CanBeLazy
  258. def setLookupCacheDynamicFilePath(lookupCacheDynamicFilePath: String): AutoGGUFModel.this.type

    Set path to dynamic lookup cache to use for lookup decoding (updated by generation)

    Definition Classes
    HasLlamaCppModelProperties
  259. def setLookupCacheStaticFilePath(lookupCacheStaticFilePath: String): AutoGGUFModel.this.type

    Set path to static lookup cache to use for lookup decoding (not updated by generation)

    Definition Classes
    HasLlamaCppModelProperties
  260. def setLoraAdapters(loraAdapters: HashMap[String, Double]): AutoGGUFModel.this.type

    Sets paths to lora adapters with user defined scale. (PySpark Override)

    Definition Classes
    HasLlamaCppModelProperties
  261. def setLoraAdapters(loraAdapters: Map[String, Float]): AutoGGUFModel.this.type

    Sets paths to lora adapters with user defined scale.

    Definition Classes
    HasLlamaCppModelProperties
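
    For example, a minimal sketch using the autoGGUFModel instance from the example above (the adapter path and scale are hypothetical):

    // Apply a LoRA adapter stored in a local GGUF file with a scale of 0.5
    autoGGUFModel.setLoraAdapters(Map("/path/to/lora-adapter.gguf" -> 0.5f))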
  262. def setMainGpu(mainGpu: Int): AutoGGUFModel.this.type

    Set the GPU that is used for scratch and small tensors

    Definition Classes
    HasLlamaCppModelProperties
  263. def setMetadata(metadata: String): AutoGGUFModel.this.type

    Set the metadata for the model

    Definition Classes
    HasLlamaCppModelProperties
  264. def setMinKeep(minKeep: Int): AutoGGUFModel.this.type

    Set the minimum number of tokens the samplers should return (0 = disabled)

    Definition Classes
    HasLlamaCppInferenceProperties
  265. def setMinP(minP: Float): AutoGGUFModel.this.type

    Set min-p sampling

    Definition Classes
    HasLlamaCppInferenceProperties
  266. def setMiroStat(mirostat: String): AutoGGUFModel.this.type

    Set MiroStat sampling strategies.

    • DISABLED: No MiroStat
    • V1: MiroStat V1
    • V2: MiroStat V2
    Definition Classes
    HasLlamaCppInferenceProperties
  267. def setMiroStatEta(mirostatEta: Float): AutoGGUFModel.this.type

    Set the MiroStat learning rate, parameter eta

    Definition Classes
    HasLlamaCppInferenceProperties
  268. def setMiroStatTau(mirostatTau: Float): AutoGGUFModel.this.type

    Set the MiroStat target entropy, parameter tau

    Definition Classes
    HasLlamaCppInferenceProperties
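
    For example, the three MiroStat setters are typically used together (the tau and eta values below are only illustrative defaults):

    // Enable MiroStat V2 sampling with target entropy 5.0 and learning rate 0.1
    autoGGUFModel
      .setMiroStat("V2")
      .setMiroStatTau(5.0f)
      .setMiroStatEta(0.1f)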
  269. def setModelDraft(modelDraft: String): AutoGGUFModel.this.type

    Set the draft model for speculative decoding

    Definition Classes
    HasLlamaCppModelProperties
  270. def setModelIfNotSet(spark: SparkSession, wrapper: GGUFWrapper): AutoGGUFModel.this.type

  271. def setNBatch(nBatch: Int): AutoGGUFModel.this.type

    Set the logical batch size for prompt processing (must be >=32 to use BLAS)

    Definition Classes
    HasLlamaCppModelProperties
  272. def setNChunks(nChunks: Int): AutoGGUFModel.this.type

    Set the maximal number of chunks to process

    Definition Classes
    HasLlamaCppModelProperties
  273. def setNCtx(nCtx: Int): AutoGGUFModel.this.type

    Set the size of the prompt context

    Definition Classes
    HasLlamaCppModelProperties
  274. def setNDraft(nDraft: Int): AutoGGUFModel.this.type

    Set the number of tokens to draft for speculative decoding

    Definition Classes
    HasLlamaCppModelProperties
  275. def setNGpuLayers(nGpuLayers: Int): AutoGGUFModel.this.type

    Set the number of layers to store in VRAM (-1 - use default)

    Definition Classes
    HasLlamaCppModelProperties
  276. def setNGpuLayersDraft(nGpuLayersDraft: Int): AutoGGUFModel.this.type

    Set the number of layers to store in VRAM for the draft model (-1 - use default)

    Definition Classes
    HasLlamaCppModelProperties
  277. def setNKeep(nKeep: Int): AutoGGUFModel.this.type

    Set the number of tokens to keep from the initial prompt

    Definition Classes
    HasLlamaCppInferenceProperties
  278. def setNPredict(nPredict: Int): AutoGGUFModel.this.type

    Set the number of tokens to predict

    Definition Classes
    HasLlamaCppInferenceProperties
  279. def setNProbs(nProbs: Int): AutoGGUFModel.this.type

    Set the number of top token probabilities to output, if greater than 0.

    Definition Classes
    HasLlamaCppInferenceProperties
  280. def setNSequences(nSequences: Int): AutoGGUFModel.this.type

    Set the number of sequences to decode

    Definition Classes
    HasLlamaCppModelProperties
  281. def setNThreads(nThreads: Int): AutoGGUFModel.this.type

    Set the number of threads to use during generation

    Definition Classes
    HasLlamaCppModelProperties
  282. def setNThreadsBatch(nThreadsBatch: Int): AutoGGUFModel.this.type

    Set the number of threads to use during batch and prompt processing

    Definition Classes
    HasLlamaCppModelProperties
  283. def setNThreadsBatchDraft(nThreadsBatchDraft: Int): AutoGGUFModel.this.type

    Set the number of threads to use during batch and prompt processing for the draft model

    Definition Classes
    HasLlamaCppModelProperties
  284. def setNThreadsDraft(nThreadsDraft: Int): AutoGGUFModel.this.type

    Set the number of threads to use during draft generation

    Definition Classes
    HasLlamaCppModelProperties
  285. def setNUbatch(nUbatch: Int): AutoGGUFModel.this.type

    Set the physical batch size for prompt processing (must be >=32 to use BLAS)

    Definition Classes
    HasLlamaCppModelProperties
  286. def setNoKvOffload(noKvOffload: Boolean): AutoGGUFModel.this.type

    Whether to disable KV offload

    Definition Classes
    HasLlamaCppModelProperties
  287. def setNumaStrategy(numa: String): AutoGGUFModel.this.type

    Set optimization strategies that help on some NUMA systems (if available)

    Available Strategies:

    • DISABLED: No NUMA optimizations
    • DISTRIBUTE: spread execution evenly over all nodes
    • ISOLATE: only spawn threads on CPUs on the node that execution started on
    • NUMA_CTL: use the CPU map provided by numactl
    • MIRROR: Mirrors the model across NUMA nodes
    Definition Classes
    HasLlamaCppModelProperties
  288. final def setOutputCol(value: String): AutoGGUFModel.this.type

    Overrides annotation column name when transforming

    Definition Classes
    HasOutputAnnotationCol
  289. def setPSplit(pSplit: Float): AutoGGUFModel.this.type

    Set the speculative decoding split probability

    Definition Classes
    HasLlamaCppModelProperties
  290. def setParent(parent: Estimator[AutoGGUFModel]): AutoGGUFModel
    Definition Classes
    Model
  291. def setPenalizeNl(penalizeNl: Boolean): AutoGGUFModel.this.type

    Set whether to penalize newline tokens

    Definition Classes
    HasLlamaCppInferenceProperties
  292. def setPenaltyPrompt(penaltyPrompt: String): AutoGGUFModel.this.type

    Override which part of the prompt is penalized for repetition.

    Definition Classes
    HasLlamaCppInferenceProperties
  293. def setPoolingType(poolingType: String): AutoGGUFModel.this.type

    Set the pooling type for embeddings, use model default if unspecified

    • 0 NONE: Don't use any pooling and return token embeddings (if the model supports it)
    • 1 MEAN: Mean Pooling
    • 2 CLS: Choose the CLS token
    • 3 LAST: Choose the last token
    Definition Classes
    HasLlamaCppModelProperties
  294. def setPresencePenalty(presencePenalty: Float): AutoGGUFModel.this.type

    Set the repetition alpha presence penalty

    Definition Classes
    HasLlamaCppInferenceProperties
  295. def setRepeatLastN(repeatLastN: Int): AutoGGUFModel.this.type

    Set the last n tokens to consider for penalties

    Definition Classes
    HasLlamaCppInferenceProperties
  296. def setRepeatPenalty(repeatPenalty: Float): AutoGGUFModel.this.type

    Set the penalty of repeated sequences of tokens

    Definition Classes
    HasLlamaCppInferenceProperties
  297. def setRopeFreqBase(ropeFreqBase: Float): AutoGGUFModel.this.type

    Set the RoPE base frequency, used by NTK-aware scaling

    Definition Classes
    HasLlamaCppModelProperties
  298. def setRopeFreqScale(ropeFreqScale: Float): AutoGGUFModel.this.type

    Set the RoPE frequency scaling factor, expands context by a factor of 1/N

    Definition Classes
    HasLlamaCppModelProperties
  299. def setRopeScalingType(ropeScalingType: String): AutoGGUFModel.this.type

    Set the RoPE frequency scaling method, defaults to linear unless specified by the model.

    • UNSPECIFIED: Don't use any scaling
    • LINEAR: Linear scaling
    • YARN: YaRN RoPE scaling
    Definition Classes
    HasLlamaCppModelProperties
  300. def setSamplers(samplers: Array[String]): AutoGGUFModel.this.type

    Set which samplers to use for token generation in the given order.

    Available Samplers are:

    • TOP_K: Top-k sampling
    • TFS_Z: Tail free sampling
    • TYPICAL_P: Locally typical sampling p
    • TOP_P: Top-p sampling
    • MIN_P: Min-p sampling
    • TEMPERATURE: Temperature sampling
    Definition Classes
    HasLlamaCppInferenceProperties
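
    For example, using the autoGGUFModel instance from the example above (the chosen samplers and their order are illustrative):

    // Apply top-k, then top-p, then temperature sampling, in this order
    autoGGUFModel.setSamplers(Array("TOP_K", "TOP_P", "TEMPERATURE"))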
  301. def setSeed(seed: Int): AutoGGUFModel.this.type

    Set the RNG seed

    Definition Classes
    HasLlamaCppInferenceProperties
  302. def setStopStrings(stopStrings: Array[String]): AutoGGUFModel.this.type

    Set the strings that stop token generation when they are encountered

    Definition Classes
    HasLlamaCppInferenceProperties
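
    For example (the stop strings are illustrative and depend on the prompt format of your model):

    // Stop generating when the model emits an end-of-sequence tag or starts a new "User:" turn
    autoGGUFModel.setStopStrings(Array("</s>", "User:"))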
  303. def setSystemPrompt(systemPrompt: String): AutoGGUFModel.this.type

    Set a system prompt to use

    Definition Classes
    HasLlamaCppModelProperties
  304. def setTemperature(temperature: Float): AutoGGUFModel.this.type

    Set the temperature

    Definition Classes
    HasLlamaCppInferenceProperties
  305. def setTensorSplit(tensorSplit: Array[Double]): AutoGGUFModel.this.type

    Set how split tensors should be distributed across GPUs

    Definition Classes
    HasLlamaCppModelProperties
  306. def setTfsZ(tfsZ: Float): AutoGGUFModel.this.type

    Set tail free sampling, parameter z

    Definition Classes
    HasLlamaCppInferenceProperties
  307. def setTokenBias(tokenBias: HashMap[String, Double]): AutoGGUFModel.this.type

    Set the tokens to disable during completion. (Override for PySpark)

    Definition Classes
    HasLlamaCppInferenceProperties
  308. def setTokenBias(tokenBias: Map[String, Float]): AutoGGUFModel.this.type

    Set the tokens to disable during completion.

    Definition Classes
    HasLlamaCppInferenceProperties
  309. def setTokenIdBias(tokenIdBias: HashMap[Integer, Double]): AutoGGUFModel.this.type

    Set the token ids to disable in the completion. (Override for PySpark)

    Definition Classes
    HasLlamaCppInferenceProperties
  310. def setTokenIdBias(tokenIdBias: Map[Int, Float]): AutoGGUFModel.this.type

    Set the token ids to disable in the completion.

    Definition Classes
    HasLlamaCppInferenceProperties
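
    For example, a minimal sketch (the token ids are hypothetical and depend on the model's vocabulary):

    // Bias token id 13 strongly downwards and disable token id 29871 entirely
    autoGGUFModel.setTokenIdBias(Map(13 -> -10.0f, 29871 -> Float.NegativeInfinity))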
  311. def setTopK(topK: Int): AutoGGUFModel.this.type

    Set top-k sampling

    Definition Classes
    HasLlamaCppInferenceProperties
  312. def setTopP(topP: Float): AutoGGUFModel.this.type

    Set top-p sampling

    Definition Classes
    HasLlamaCppInferenceProperties
  313. def setTypicalP(typicalP: Float): AutoGGUFModel.this.type

    Set locally typical sampling, parameter p

    Definition Classes
    HasLlamaCppInferenceProperties
  314. def setUseChatTemplate(useChatTemplate: Boolean): AutoGGUFModel.this.type

    Set whether or not the generate call should apply a chat template

    Definition Classes
    HasLlamaCppInferenceProperties
  315. def setUseMlock(useMlock: Boolean): AutoGGUFModel.this.type

    Whether to force the system to keep model in RAM rather than swapping or compressing

    Definition Classes
    HasLlamaCppModelProperties
  316. def setUseMmap(useMmap: Boolean): AutoGGUFModel.this.type

    Whether to memory-map the model (faster load but may increase pageouts if not using mlock)

    Definition Classes
    HasLlamaCppModelProperties
  317. def setYarnAttnFactor(yarnAttnFactor: Float): AutoGGUFModel.this.type

    Set the YaRN scale sqrt(t) or attention magnitude

    Definition Classes
    HasLlamaCppModelProperties
  318. def setYarnBetaFast(yarnBetaFast: Float): AutoGGUFModel.this.type

    Set the YaRN low correction dim or beta

    Definition Classes
    HasLlamaCppModelProperties
  319. def setYarnBetaSlow(yarnBetaSlow: Float): AutoGGUFModel.this.type

    Set the YaRN high correction dim or alpha

    Definition Classes
    HasLlamaCppModelProperties
  320. def setYarnExtFactor(yarnExtFactor: Float): AutoGGUFModel.this.type

    Set the YaRN extrapolation mix factor

    Definition Classes
    HasLlamaCppModelProperties
  321. def setYarnOrigCtx(yarnOrigCtx: Int): AutoGGUFModel.this.type

    Set the YaRN original context size of model

    Definition Classes
    HasLlamaCppModelProperties
  322. val stopStrings: StringArrayParam

  323. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  324. val systemPrompt: Param[String]

    Definition Classes
    HasLlamaCppModelProperties
  325. val temperature: FloatParam

  326. val tensorSplit: DoubleArrayParam

    Definition Classes
    HasLlamaCppModelProperties
  327. val tfsZ: FloatParam

  328. def toString(): String
    Definition Classes
    Identifiable → AnyRef → Any
  329. val tokenBias: StructFeature[Map[String, Float]]

  330. val tokenIdBias: StructFeature[Map[Int, Float]]
  331. val topK: IntParam

  332. val topP: FloatParam

  333. final def transform(dataset: Dataset[_]): DataFrame

    Given requirements are met, this applies the ML transformation within a Pipeline or standalone. The output annotation will be generated as a new column; previous annotations are still available separately. Metadata is built at schema level to record the annotation's structural information outside its content.

    dataset

    Dataset[Row]

    Definition Classes
    AnnotatorModel → Transformer
  334. def transform(dataset: Dataset[_], paramMap: ParamMap): DataFrame
    Definition Classes
    Transformer
    Annotations
    @Since( "2.0.0" )
  335. def transform(dataset: Dataset[_], firstParamPair: ParamPair[_], otherParamPairs: ParamPair[_]*): DataFrame
    Definition Classes
    Transformer
    Annotations
    @Since( "2.0.0" ) @varargs()
  336. final def transformSchema(schema: StructType): StructType

    Requirement for pipeline transformation validation. It is called on fit().

    Definition Classes
    RawAnnotator → PipelineStage
  337. def transformSchema(schema: StructType, logging: Boolean): StructType
    Attributes
    protected
    Definition Classes
    PipelineStage
    Annotations
    @DeveloperApi()
  338. val typicalP: FloatParam

  339. val uid: String
    Definition Classes
    AutoGGUFModel → Identifiable
  340. val useChatTemplate: BooleanParam

  341. val useMlock: BooleanParam

    Definition Classes
    HasLlamaCppModelProperties
  342. val useMmap: BooleanParam

    Definition Classes
    HasLlamaCppModelProperties
  343. def validate(schema: StructType): Boolean

    Takes a Dataset and checks to see if all the required annotation types are present.

    schema

    to be validated

    returns

    True if all the required types are present, else false

    Attributes
    protected
    Definition Classes
    RawAnnotator
  344. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  345. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  346. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  347. def wrapColumnMetadata(col: Column): Column
    Attributes
    protected
    Definition Classes
    RawAnnotator
  348. def write: MLWriter
    Definition Classes
    ParamsAndFeaturesWritable → DefaultParamsWritable → MLWritable
  349. val yarnAttnFactor: FloatParam

    Definition Classes
    HasLlamaCppModelProperties
  350. val yarnBetaFast: FloatParam

    Definition Classes
    HasLlamaCppModelProperties
  351. val yarnBetaSlow: FloatParam

    Definition Classes
    HasLlamaCppModelProperties
  352. val yarnExtFactor: FloatParam

    Definition Classes
    HasLlamaCppModelProperties
  353. val yarnOrigCtx: IntParam

    Definition Classes
    HasLlamaCppModelProperties

Inherited from HasProtectedParams

Inherited from HasEngine

Inherited from AnnotatorModel[AutoGGUFModel]

Inherited from CanBeLazy

Inherited from RawAnnotator[AutoGGUFModel]

Inherited from HasOutputAnnotationCol

Inherited from HasInputAnnotationCols

Inherited from HasOutputAnnotatorType

Inherited from ParamsAndFeaturesWritable

Inherited from HasFeatures

Inherited from DefaultParamsWritable

Inherited from MLWritable

Inherited from Model[AutoGGUFModel]

Inherited from Transformer

Inherited from PipelineStage

Inherited from Logging

Inherited from Params

Inherited from Serializable

Inherited from Serializable

Inherited from Identifiable

Inherited from AnyRef

Inherited from Any

Parameters

A list of (hyper-)parameter keys this annotator can take. Users can set and get the parameter values through setters and getters, respectively.

Members

Parameter setters

Parameter getters