sparknlp.annotator.seq2seq.auto_gguf_model#

Contains classes for the AutoGGUFModel.

Module Contents#

Classes#

AutoGGUFModel

Annotator that uses the llama.cpp library to generate text completions with large language models.

class AutoGGUFModel(classname='com.johnsnowlabs.nlp.annotators.seq2seq.AutoGGUFModel', java_model=None)[source]#

Annotator that uses the llama.cpp library to generate text completions with large language models.

For settable parameters and their explanations, see the parameters of this class, and refer to the llama.cpp documentation of server.cpp for more information.

If the parameters are not set, the annotator will default to the parameters provided by the model.

Pretrained models can be loaded with pretrained() of the companion object:

>>> auto_gguf_model = AutoGGUFModel.pretrained() \
...     .setInputCols(["document"]) \
...     .setOutputCol("completions")

The default model is "phi3.5_mini_4k_instruct_q4_gguf", if no name is provided.

For extended examples of usage, see the AutoGGUFModelTest and the example notebook.

For available pretrained models please see the Models Hub.

Input Annotation types: DOCUMENT

Output Annotation type: DOCUMENT

Parameters:
nThreads

Set the number of threads to use during generation

nThreadsDraft

Set the number of threads to use during draft generation

nThreadsBatch

Set the number of threads to use during batch and prompt processing

nThreadsBatchDraft

Set the number of threads to use during batch and prompt processing for the draft model

nCtx

Set the size of the prompt context

nBatch

Set the logical batch size for prompt processing (must be >=32 to use BLAS)

nUbatch

Set the physical batch size for prompt processing (must be >=32 to use BLAS)

nDraft

Set the number of tokens to draft for speculative decoding

nChunks

Set the maximal number of chunks to process

nSequences

Set the number of sequences to decode

pSplit

Set the speculative decoding split probability

nGpuLayers

Set the number of layers to store in VRAM (-1 = use default)

nGpuLayersDraft

Set the number of layers to store in VRAM for the draft model (-1 = use default)

gpuSplitMode

Set how to split the model across GPUs

mainGpu

Set the main GPU that is used for scratch and small tensors.

tensorSplit

Set how split tensors should be distributed across GPUs

grpAttnN

Set the group-attention factor

grpAttnW

Set the group-attention width

ropeFreqBase

Set the RoPE base frequency, used by NTK-aware scaling

ropeFreqScale

Set the RoPE frequency scaling factor, expands context by a factor of 1/N

yarnExtFactor

Set the YaRN extrapolation mix factor

yarnAttnFactor

Set the YaRN scale sqrt(t) or attention magnitude

yarnBetaFast

Set the YaRN low correction dim or beta

yarnBetaSlow

Set the YaRN high correction dim or alpha

yarnOrigCtx

Set the YaRN original context size of the model

defragmentationThreshold

Set the KV cache defragmentation threshold

numaStrategy

Set optimization strategies that help on some NUMA systems (if available)

ropeScalingType

Set the RoPE frequency scaling method, defaults to linear unless specified by the model

poolingType

Set the pooling type for embeddings, use model default if unspecified

modelDraft

Set the draft model for speculative decoding

modelAlias

Set a model alias

lookupCacheStaticFilePath

Set path to static lookup cache to use for lookup decoding (not updated by generation)

lookupCacheDynamicFilePath

Set path to dynamic lookup cache to use for lookup decoding (updated by generation)

embedding

Whether to load model with embedding support

flashAttention

Whether to enable Flash Attention

inputPrefixBos

Whether to add prefix BOS to user inputs, preceding the --in-prefix string

useMmap

Whether to memory-map the model (faster load but may increase pageouts if not using mlock)

useMlock

Whether to force the system to keep model in RAM rather than swapping or compressing

noKvOffload

Whether to disable KV offload

systemPrompt

Set a system prompt to use

chatTemplate

The chat template to use

inputPrefix

Set the prompt to start generation with

inputSuffix

Set a suffix for infilling

cachePrompt

Whether to remember the prompt to avoid reprocessing it

nPredict

Set the number of tokens to predict

topK

Set top-k sampling

topP

Set top-p sampling

minP

Set min-p sampling

tfsZ

Set tail free sampling, parameter z

typicalP

Set locally typical sampling, parameter p

temperature

Set the temperature

dynatempRange

Set the dynamic temperature range

dynatempExponent

Set the dynamic temperature exponent

repeatLastN

Set the last n tokens to consider for penalties

repeatPenalty

Set the penalty of repeated sequences of tokens

frequencyPenalty

Set the repetition alpha frequency penalty

presencePenalty

Set the repetition alpha presence penalty

miroStat

Set the MiroStat sampling strategy.

mirostatTau

Set the MiroStat target entropy, parameter tau

mirostatEta

Set the MiroStat learning rate, parameter eta

penalizeNl

Whether to penalize newline tokens

nKeep

Set the number of tokens to keep from the initial prompt

seed

Set the RNG seed

nProbs

Set the number of top token probabilities to output, if greater than 0.

minKeep

Set the minimum number of tokens the samplers should return (0 = disabled)

grammar

Set BNF-like grammar to constrain generations

penaltyPrompt

Override which part of the prompt is penalized for repetition.

ignoreEos

Set whether to ignore the end-of-stream token and continue generating (implies --logit-bias EOS-inf)

disableTokenIds

Set the token ids to disable in the completion

stopStrings

Set strings that stop token generation when encountered

samplers

Set which samplers to use for token generation in the given order

useChatTemplate

Set whether generate should apply a chat template

Notes

To use GPU inference with this annotator, make sure to use the Spark NLP GPU package and set the number of GPU layers with the setNGpuLayers method.

When using larger models, we recommend adjusting GPU usage with setNCtx and setNGpuLayers according to your hardware to avoid out-of-memory errors.
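
As a minimal sketch (the concrete values are illustrative, not recommendations, and depend entirely on your hardware), a smaller GPU might be accommodated by capping the context size and offloading only part of the model:

>>> autoGGUFModel = AutoGGUFModel.pretrained() \
...     .setInputCols(["document"]) \
...     .setOutputCol("completions") \
...     .setNCtx(2048) \
...     .setNGpuLayers(20)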


Examples

>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
>>> document = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> autoGGUFModel = AutoGGUFModel.pretrained() \
...     .setInputCols(["document"]) \
...     .setOutputCol("completions") \
...     .setBatchSize(4) \
...     .setNPredict(20) \
...     .setNGpuLayers(99) \
...     .setTemperature(0.4) \
...     .setTopK(40) \
...     .setTopP(0.9) \
...     .setPenalizeNl(True)
>>> pipeline = Pipeline().setStages([document, autoGGUFModel])
>>> data = spark.createDataFrame([["Hello, I am a"]]).toDF("text")
>>> result = pipeline.fit(data).transform(data)
>>> result.select("completions").show(truncate=False)
+-----------------------------------------------------------------------------------------------------------------------------------+
|completions                                                                                                                        |
+-----------------------------------------------------------------------------------------------------------------------------------+
|[{document, 0, 78,  new user.  I am currently working on a project and I need to create a list of , {prompt -> Hello, I am a}, []}]|
+-----------------------------------------------------------------------------------------------------------------------------------+
setNThreads(nThreads: int)[source]#

Set the number of threads to use during generation

setNThreadsDraft(nThreadsDraft: int)[source]#

Set the number of threads to use during draft generation

setNThreadsBatch(nThreadsBatch: int)[source]#

Set the number of threads to use during batch and prompt processing

setNThreadsBatchDraft(nThreadsBatchDraft: int)[source]#

Set the number of threads to use during batch and prompt processing for the draft model

setNCtx(nCtx: int)[source]#

Set the size of the prompt context

setNBatch(nBatch: int)[source]#

Set the logical batch size for prompt processing (must be >=32 to use BLAS)

setNUbatch(nUbatch: int)[source]#

Set the physical batch size for prompt processing (must be >=32 to use BLAS)

setNDraft(nDraft: int)[source]#

Set the number of tokens to draft for speculative decoding

setNChunks(nChunks: int)[source]#

Set the maximal number of chunks to process

setNSequences(nSequences: int)[source]#

Set the number of sequences to decode

setPSplit(pSplit: float)[source]#

Set the speculative decoding split probability

setNGpuLayers(nGpuLayers: int)[source]#

Set the number of layers to store in VRAM (-1 = use default)

setNGpuLayersDraft(nGpuLayersDraft: int)[source]#

Set the number of layers to store in VRAM for the draft model (-1 = use default)

setGpuSplitMode(gpuSplitMode: str)[source]#

Set how to split the model across GPUs

setMainGpu(mainGpu: int)[source]#

Set the main GPU that is used for scratch and small tensors.

setTensorSplit(tensorSplit: List[float])[source]#

Set how split tensors should be distributed across GPUs

setGrpAttnN(grpAttnN: int)[source]#

Set the group-attention factor

setGrpAttnW(grpAttnW: int)[source]#

Set the group-attention width

setRopeFreqBase(ropeFreqBase: float)[source]#

Set the RoPE base frequency, used by NTK-aware scaling

setRopeFreqScale(ropeFreqScale: float)[source]#

Set the RoPE frequency scaling factor, expands context by a factor of 1/N

setYarnExtFactor(yarnExtFactor: float)[source]#

Set the YaRN extrapolation mix factor

setYarnAttnFactor(yarnAttnFactor: float)[source]#

Set the YaRN scale sqrt(t) or attention magnitude

setYarnBetaFast(yarnBetaFast: float)[source]#

Set the YaRN low correction dim or beta

setYarnBetaSlow(yarnBetaSlow: float)[source]#

Set the YaRN high correction dim or alpha

setYarnOrigCtx(yarnOrigCtx: int)[source]#

Set the YaRN original context size of the model

setDefragmentationThreshold(defragmentationThreshold: float)[source]#

Set the KV cache defragmentation threshold

setNumaStrategy(numaStrategy: str)[source]#

Set optimization strategies that help on some NUMA systems (if available)

setRopeScalingType(ropeScalingType: str)[source]#

Set the RoPE frequency scaling method, defaults to linear unless specified by the model

setPoolingType(poolingType: str)[source]#

Set the pooling type for embeddings, use model default if unspecified

setModelDraft(modelDraft: str)[source]#

Set the draft model for speculative decoding
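
For example, speculative decoding can be sketched by pairing the main model with a smaller draft model; the GGUF path and draft count below are placeholders, not shipped assets:

>>> model = AutoGGUFModel.pretrained() \
...     .setInputCols(["document"]) \
...     .setOutputCol("completions") \
...     .setModelDraft("/path/to/draft_model.gguf") \
...     .setNDraft(8)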

setModelAlias(modelAlias: str)[source]#

Set a model alias

setLookupCacheStaticFilePath(lookupCacheStaticFilePath: str)[source]#

Set path to static lookup cache to use for lookup decoding (not updated by generation)

setLookupCacheDynamicFilePath(lookupCacheDynamicFilePath: str)[source]#

Set path to dynamic lookup cache to use for lookup decoding (updated by generation)

setEmbedding(embedding: bool)[source]#

Whether to load model with embedding support

setFlashAttention(flashAttention: bool)[source]#

Whether to enable Flash Attention

setInputPrefixBos(inputPrefixBos: bool)[source]#

Whether to add prefix BOS to user inputs, preceding the --in-prefix string

setUseMmap(useMmap: bool)[source]#

Whether to memory-map the model (faster load but may increase pageouts if not using mlock)

setUseMlock(useMlock: bool)[source]#

Whether to force the system to keep model in RAM rather than swapping or compressing

setNoKvOffload(noKvOffload: bool)[source]#

Whether to disable KV offload

setSystemPrompt(systemPrompt: str)[source]#

Set a system prompt to use

setChatTemplate(chatTemplate: str)[source]#

The chat template to use

setInputPrefix(inputPrefix: str)[source]#

Set the prompt to start generation with

setInputSuffix(inputSuffix: str)[source]#

Set a suffix for infilling

setCachePrompt(cachePrompt: bool)[source]#

Whether to remember the prompt to avoid reprocessing it

setNPredict(nPredict: int)[source]#

Set the number of tokens to predict

setTopK(topK: int)[source]#

Set top-k sampling

setTopP(topP: float)[source]#

Set top-p sampling

setMinP(minP: float)[source]#

Set min-p sampling

setTfsZ(tfsZ: float)[source]#

Set tail free sampling, parameter z

setTypicalP(typicalP: float)[source]#

Set locally typical sampling, parameter p

setTemperature(temperature: float)[source]#

Set the temperature

setDynamicTemperatureRange(dynamicTemperatureRange: float)[source]#

Set the dynamic temperature range

setDynamicTemperatureExponent(dynamicTemperatureExponent: float)[source]#

Set the dynamic temperature exponent

setRepeatLastN(repeatLastN: int)[source]#

Set the last n tokens to consider for penalties

setRepeatPenalty(repeatPenalty: float)[source]#

Set the penalty of repeated sequences of tokens

setFrequencyPenalty(frequencyPenalty: float)[source]#

Set the repetition alpha frequency penalty

setPresencePenalty(presencePenalty: float)[source]#

Set the repetition alpha presence penalty

setMiroStat(miroStat: str)[source]#

Set the MiroStat sampling strategy.

setMiroStatTau(miroStatTau: float)[source]#

Set the MiroStat target entropy, parameter tau

setMiroStatEta(miroStatEta: float)[source]#

Set the MiroStat learning rate, parameter eta

setPenalizeNl(penalizeNl: bool)[source]#

Whether to penalize newline tokens

setNKeep(nKeep: int)[source]#

Set the number of tokens to keep from the initial prompt

setSeed(seed: int)[source]#

Set the RNG seed

setNProbs(nProbs: int)[source]#

Set the number of top token probabilities to output, if greater than 0.

setMinKeep(minKeep: int)[source]#

Set the minimum number of tokens the samplers should return (0 = disabled)

setGrammar(grammar: str)[source]#

Set BNF-like grammar to constrain generations
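
llama.cpp accepts grammars in its GBNF format. A minimal sketch that constrains completions to a yes/no answer (the grammar string is illustrative):

>>> model = AutoGGUFModel.pretrained() \
...     .setInputCols(["document"]) \
...     .setOutputCol("completions") \
...     .setGrammar('root ::= "yes" | "no"')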

setPenaltyPrompt(penaltyPrompt: str)[source]#

Override which part of the prompt is penalized for repetition.

setIgnoreEos(ignoreEos: bool)[source]#

Set whether to ignore the end-of-stream token and continue generating (implies --logit-bias EOS-inf)

setDisableTokenIds(disableTokenIds: List[int])[source]#

Set the token ids to disable in the completion

setStopStrings(stopStrings: List[str])[source]#

Set strings that stop token generation when encountered

setSamplers(samplers: List[str])[source]#

Set which samplers to use for token generation in the given order

setUseChatTemplate(useChatTemplate: bool)[source]#

Set whether generate should apply a chat template

setTokenIdBias(tokenIdBias: Dict[int, float])[source]#

Set token id bias

setTokenBias(tokenBias: Dict[str, float])[source]#

Set token bias

setLoraAdapters(loraAdapters: Dict[str, float])[source]#

Set the LoRA adapters and their scaling factors

getMetadata()[source]#

Gets the metadata of the model
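
For example, to inspect the metadata of the loaded GGUF file:

>>> model = AutoGGUFModel.pretrained()
>>> print(model.getMetadata())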

static loadSavedModel(folder, spark_session)[source]#

Loads a locally saved model.

Parameters:
folder : str

Folder of the saved model

spark_session : pyspark.sql.SparkSession

The current SparkSession

Returns:
AutoGGUFModel

The restored model
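
A minimal usage sketch, assuming a model was previously exported to the placeholder folder exported_model and spark is an active SparkSession:

>>> model = AutoGGUFModel.loadSavedModel("exported_model", spark) \
...     .setInputCols(["document"]) \
...     .setOutputCol("completions")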

static pretrained(name='phi3.5_mini_4k_instruct_q4_gguf', lang='en', remote_loc=None)[source]#

Downloads and loads a pretrained model.

Parameters:
name : str, optional

Name of the pretrained model, by default “phi3.5_mini_4k_instruct_q4_gguf”

lang : str, optional

Language of the pretrained model, by default “en”

remote_loc : str, optional

Optional remote address of the resource, by default None. Will use Spark NLP's repositories otherwise.

Returns:
AutoGGUFModel

The restored model
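
For example, the default model can also be requested explicitly by name and language:

>>> model = AutoGGUFModel.pretrained("phi3.5_mini_4k_instruct_q4_gguf", "en") \
...     .setInputCols(["document"]) \
...     .setOutputCol("completions")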