Contains classes for Annotator properties.
Module Contents
Classes
-
class HasBatchedAnnotate[source]
-
batchSize[source]
-
setBatchSize(v)[source]
Sets batch size.
- Parameters:
- v : int
Batch size
-
getBatchSize()[source]
Gets current batch size.
- Returns:
- int
Current batch size
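-
Example (a minimal sketch; BertEmbeddings is used here as an illustrative annotator that mixes in this class):
>>> from sparknlp.annotator import BertEmbeddings
>>> embeddings = BertEmbeddings.pretrained() \
...     .setInputCols(["document", "token"]) \
...     .setOutputCol("embeddings") \
...     .setBatchSize(16)
>>> embeddings.getBatchSize()
16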
-
class HasCaseSensitiveProperties[source]
-
caseSensitive[source]
-
setCaseSensitive(value)[source]
Sets whether to ignore case in tokens for embeddings matching.
- Parameters:
- value : bool
Whether to ignore case in tokens for embeddings matching
-
getCaseSensitive()[source]
Gets whether to ignore case in tokens for embeddings matching.
- Returns:
- bool
Whether to ignore case in tokens for embeddings matching
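-
Example (a minimal sketch; BertEmbeddings is used here as an illustrative annotator that mixes in this class):
>>> from sparknlp.annotator import BertEmbeddings
>>> embeddings = BertEmbeddings.pretrained().setCaseSensitive(False)
>>> embeddings.getCaseSensitive()
False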
-
class HasClsTokenProperties[source]
-
useCLSToken[source]
-
setUseCLSToken(value)[source]
Sets whether to use the CLS token for pooling (true) or attention-based average pooling (false).
- Parameters:
- value : bool
Whether to use CLS token for pooling (true) or attention-based average pooling (false)
-
getUseCLSToken()[source]
Gets whether to use the CLS token for pooling (true) or attention-based average pooling (false).
- Returns:
- bool
Whether to use CLS token for pooling (true) or attention-based average pooling (false)
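-
Example (a sketch; BGEEmbeddings is assumed here to expose this parameter):
>>> from sparknlp.annotator import BGEEmbeddings
>>> embeddings = BGEEmbeddings.pretrained().setUseCLSToken(True)
>>> embeddings.getUseCLSToken()
True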
-
class HasClassifierActivationProperties[source]
-
activation[source]
-
multilabel[source]
-
threshold[source]
-
setActivation(value)[source]
Sets the activation function used to calculate logits, either Softmax or Sigmoid. Default is Softmax.
- Parameters:
- value : str
The activation function to calculate logits, either softmax or sigmoid
-
getActivation()[source]
Gets the activation function used to calculate logits, either Softmax or Sigmoid. Default is Softmax.
- Returns:
- str
The activation function used to calculate logits
-
setMultilabel(value)[source]
Sets whether the result should be multi-class (the sum of all probabilities is 1.0) or
multi-label (each label has a probability between 0.0 and 1.0).
Default is False, i.e. multi-class.
- Parameters:
- value : bool
Whether the result should be multi-class (the sum of all probabilities is 1.0) or
multi-label (each label has a probability between 0.0 and 1.0).
Default is False, i.e. multi-class
-
getMultilabel()[source]
Gets whether the result should be multi-class (the sum of all probabilities is 1.0) or
multi-label (each label has a probability between 0.0 and 1.0).
Default is False, i.e. multi-class.
- Returns:
- bool
Whether the result should be multi-class or multi-label
-
setThreshold(value)[source]
Sets the threshold to determine which logits are considered to be positive or negative
(default: 0.5). The value should be between 0.0 and 1.0. Changing the threshold
will affect the resulting labels and can be used to adjust the balance between precision and
recall in the classification process.
- Parameters:
- value : float
The threshold to determine which logits are considered to be positive or negative
(default: 0.5)
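-
Example (a sketch; BertForSequenceClassification is assumed here to expose all three parameters):
>>> from sparknlp.annotator import BertForSequenceClassification
>>> classifier = BertForSequenceClassification.pretrained() \
...     .setInputCols(["document", "token"]) \
...     .setOutputCol("class") \
...     .setActivation("sigmoid") \
...     .setMultilabel(True) \
...     .setThreshold(0.7)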
-
class HasEmbeddingsProperties[source]
Components that take parameters. This also provides an internal
param map to store parameter values attached to the instance.
-
dimension[source]
-
setDimension(value)[source]
Sets embeddings dimension.
- Parameters:
- value : int
Embeddings dimension
-
getDimension()[source]
Gets embeddings dimension.
- Returns:
- int
Embeddings dimension
-
class HasEnableCachingProperties[source]
-
enableCaching[source]
-
setEnableCaching(value)[source]
Sets whether to enable caching DataFrames or RDDs during training.
- Parameters:
- value : bool
Whether to enable caching DataFrames or RDDs during training
-
getEnableCaching()[source]
Gets whether to enable caching DataFrames or RDDs during training.
- Returns:
- bool
Whether to enable caching DataFrames or RDDs during training
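-
Example (a sketch; Doc2VecApproach is assumed here to mix in this class):
>>> from sparknlp.annotator import Doc2VecApproach
>>> doc2vec = Doc2VecApproach() \
...     .setInputCols(["token"]) \
...     .setOutputCol("embeddings") \
...     .setEnableCaching(True)
>>> doc2vec.getEnableCaching()
True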
-
class HasBatchedAnnotateImage[source]
-
batchSize[source]
-
setBatchSize(v)[source]
Sets batch size.
- Parameters:
- v : int
Batch size
-
getBatchSize()[source]
Gets current batch size.
- Returns:
- int
Current batch size
-
class HasImageFeatureProperties[source]
-
doResize[source]
-
doNormalize[source]
-
imageMean[source]
-
imageStd[source]
-
resample[source]
-
size[source]
-
setDoResize(value)[source]
- Parameters:
- value : bool
Whether to resize the input to a certain size
-
setDoNormalize(value)[source]
- Parameters:
- value : bool
Whether to normalize the input with mean and standard deviation
-
setFeatureExtractorType(value)[source]
- Parameters:
- value : str
Name of model’s architecture for feature extraction
-
setImageStd(value)[source]
- Parameters:
- value : List[float]
The sequence of standard deviations for each channel, to be used when normalizing images
-
setImageMean(value)[source]
- Parameters:
- value : List[float]
The sequence of means for each channel, to be used when normalizing images
-
setResample(value)[source]
- Parameters:
- value : int
Resampling filter for resizing. This can be one of PIL.Image.NEAREST, PIL.Image.BILINEAR or
PIL.Image.BICUBIC. Only has an effect if doResize is set to True.
-
setSize(value)[source]
- Parameters:
- value : int
Resize the input to the given size. If a tuple is provided, it should be (width, height).
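-
Example (a sketch; ViTForImageClassification is used here as an illustrative annotator, and the mean/std values are assumptions):
>>> from sparknlp.annotator import ViTForImageClassification
>>> classifier = ViTForImageClassification.pretrained() \
...     .setInputCols(["image_assembler"]) \
...     .setOutputCol("class") \
...     .setDoResize(True) \
...     .setSize(224) \
...     .setDoNormalize(True) \
...     .setImageMean([0.5, 0.5, 0.5]) \
...     .setImageStd([0.5, 0.5, 0.5])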
-
class HasRescaleFactor[source]
-
doRescale[source]
-
rescaleFactor[source]
-
setDoRescale(value)[source]
Sets whether to rescale the image values by rescaleFactor, by default True.
- Parameters:
- value : bool
Whether to rescale the image values by rescaleFactor
-
setRescaleFactor(value)[source]
Sets the factor to scale the image values, by default 1/255.0.
- Parameters:
- value : float
The factor to scale the image values
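-
Example (a sketch; ConvNextForImageClassification is assumed here to mix in this class):
>>> from sparknlp.annotator import ConvNextForImageClassification
>>> classifier = ConvNextForImageClassification.pretrained() \
...     .setDoRescale(True) \
...     .setRescaleFactor(1 / 255.0)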
-
class HasBatchedAnnotateAudio[source]
-
batchSize[source]
-
setBatchSize(v)[source]
Sets batch size.
- Parameters:
- v : int
Batch size
-
getBatchSize()[source]
Gets current batch size.
- Returns:
- int
Current batch size
-
class HasAudioFeatureProperties[source]
-
doNormalize[source]
-
returnAttentionMask[source]
-
paddingSide[source]
-
featureSize[source]
-
samplingRate[source]
-
paddingValue[source]
-
setDoNormalize(value)[source]
- Parameters:
- value : bool
Whether to normalize the input with mean and standard deviation
-
setReturnAttentionMask(value)[source]
- Parameters:
- value : bool
Whether to return the attention mask
-
setPaddingSide(value)[source]
- Parameters:
- value : str
The side on which to apply padding
-
setFeatureSize(value)[source]
- Parameters:
- value : int
The feature dimension of the extracted features
-
setSamplingRate(value)[source]
- Parameters:
- value : int
The sampling rate at which the audio should be digitized
-
setPaddingValue(value)[source]
- Parameters:
- value : float
The value used to fill padded elements
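-
Example (a sketch; Wav2Vec2ForCTC is used here as an illustrative annotator, and the values mirror common Wav2Vec2 defaults):
>>> from sparknlp.annotator import Wav2Vec2ForCTC
>>> speech_to_text = Wav2Vec2ForCTC.pretrained() \
...     .setInputCols(["audio_assembler"]) \
...     .setOutputCol("text") \
...     .setDoNormalize(True) \
...     .setReturnAttentionMask(True) \
...     .setPaddingSide("right") \
...     .setSamplingRate(16000) \
...     .setPaddingValue(0.0)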
-
class HasEngine[source]
-
engine[source]
-
getEngine()[source]
- Returns:
- str
Deep Learning engine used for this model
-
class HasCandidateLabelsProperties[source]
-
candidateLabels[source]
-
contradictionIdParam[source]
-
entailmentIdParam[source]
-
setCandidateLabels(v)[source]
Sets candidateLabels.
- Parameters:
- v : List[str]
candidateLabels
-
setContradictionIdParam(v)[source]
Sets contradictionIdParam.
- Parameters:
- v : int
contradictionIdParam
-
setEntailmentIdParam(v)[source]
Sets entailmentIdParam.
- Parameters:
- v : int
entailmentIdParam
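-
Example (a sketch; BertForZeroShotClassification is assumed here to mix in this class):
>>> from sparknlp.annotator import BertForZeroShotClassification
>>> zero_shot = BertForZeroShotClassification.pretrained() \
...     .setInputCols(["document", "token"]) \
...     .setOutputCol("class") \
...     .setCandidateLabels(["urgent", "mobile", "travel"])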
-
class HasMaxSentenceLengthLimit[source]
-
max_length_limit = 512[source]
-
maxSentenceLength[source]
-
setMaxSentenceLength(value)[source]
Sets max sentence length to process.
Note that a maximum limit exists depending on the model. If you are working with long single
sequences, consider splitting up the input first with another annotator e.g. SentenceDetector.
- Parameters:
- value : int
Max sentence length to process
-
getMaxSentenceLength()[source]
Gets the max sentence length of the model.
- Returns:
- int
Max sentence length to process
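-
Example (a minimal sketch; BertEmbeddings is used here as an illustrative annotator whose limit is 512):
>>> from sparknlp.annotator import BertEmbeddings
>>> embeddings = BertEmbeddings.pretrained().setMaxSentenceLength(128)
>>> embeddings.getMaxSentenceLength()
128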
-
class HasLongMaxSentenceLengthLimit[source]
-
max_length_limit = 4096[source]
-
class HasGeneratorProperties[source]
-
task[source]
-
minOutputLength[source]
-
maxOutputLength[source]
-
doSample[source]
-
temperature[source]
-
topK[source]
-
topP[source]
-
repetitionPenalty[source]
-
noRepeatNgramSize[source]
-
beamSize[source]
-
nReturnSequences[source]
-
setTask(value)[source]
Sets the transformer’s task, e.g. "summarize:".
- Parameters:
- value : str
The transformer’s task
-
setMinOutputLength(value)[source]
Sets minimum length of the sequence to be generated.
- Parameters:
- value : int
Minimum length of the sequence to be generated
-
setMaxOutputLength(value)[source]
Sets maximum length of output text.
- Parameters:
- value : int
Maximum length of output text
-
setDoSample(value)[source]
Sets whether or not to use sampling, use greedy decoding otherwise.
- Parameters:
- value : bool
Whether or not to use sampling; use greedy decoding otherwise
-
setTemperature(value)[source]
Sets the value used to modulate the next token probabilities.
- Parameters:
- value : float
The value used to modulate the next token probabilities
-
setTopK(value)[source]
Sets the number of highest probability vocabulary tokens to keep for
top-k-filtering.
- Parameters:
- value : int
Number of highest probability vocabulary tokens to keep
-
setTopP(value)[source]
Sets the top cumulative probability for vocabulary tokens.
If set to a float < 1, only the most probable tokens with probabilities
that add up to topP or higher are kept for generation.
- Parameters:
- value : float
Cumulative probability for vocabulary tokens
-
setRepetitionPenalty(value)[source]
Sets the parameter for repetition penalty. 1.0 means no penalty.
- Parameters:
- value : float
The repetition penalty
References
See CTRL: A Conditional Transformer Language Model for Controllable
Generation for more details.
-
setNoRepeatNgramSize(value)[source]
Sets size of n-grams that can only occur once.
If set to int > 0, all ngrams of that size can only occur once.
- Parameters:
- value : int
Size of n-grams that can only occur once
-
setBeamSize(value)[source]
Sets the beam size for beam search.
- Parameters:
- value : int
The beam size for beam search
-
setNReturnSequences(value)[source]
Sets the number of sequences to return from the beam search.
- Parameters:
- value : int
Number of sequences to return
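-
Example (a sketch; T5Transformer is used here as an illustrative annotator that mixes in this class):
>>> from sparknlp.annotator import T5Transformer
>>> t5 = T5Transformer.pretrained() \
...     .setInputCols(["document"]) \
...     .setOutputCol("summary") \
...     .setTask("summarize:") \
...     .setMinOutputLength(10) \
...     .setMaxOutputLength(100) \
...     .setDoSample(True) \
...     .setTemperature(0.7) \
...     .setTopK(50) \
...     .setNoRepeatNgramSize(3)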
-
class HasLlamaCppProperties[source]
-
nThreads[source]
-
nThreadsBatch[source]
-
nCtx[source]
-
nBatch[source]
-
nUbatch[source]
-
nDraft[source]
-
nGpuLayers[source]
-
nGpuLayersDraft[source]
-
gpuSplitMode[source]
-
mainGpu[source]
-
ropeFreqBase[source]
-
ropeFreqScale[source]
-
yarnExtFactor[source]
-
yarnAttnFactor[source]
-
yarnBetaFast[source]
-
yarnBetaSlow[source]
-
yarnOrigCtx[source]
-
defragmentationThreshold[source]
-
numaStrategy[source]
-
ropeScalingType[source]
-
poolingType[source]
-
modelDraft[source]
-
modelAlias[source]
-
embedding[source]
-
flashAttention[source]
-
useMmap[source]
-
useMlock[source]
-
noKvOffload[source]
-
systemPrompt[source]
-
chatTemplate[source]
-
inputPrefix[source]
-
inputSuffix[source]
-
cachePrompt[source]
-
nPredict[source]
-
topK[source]
-
topP[source]
-
minP[source]
-
tfsZ[source]
-
typicalP[source]
-
temperature[source]
-
dynamicTemperatureRange[source]
-
dynamicTemperatureExponent[source]
-
repeatLastN[source]
-
repeatPenalty[source]
-
frequencyPenalty[source]
-
presencePenalty[source]
-
miroStat[source]
-
miroStatTau[source]
-
miroStatEta[source]
-
penalizeNl[source]
-
nKeep[source]
-
seed[source]
-
nProbs[source]
-
minKeep[source]
-
grammar[source]
-
penaltyPrompt[source]
-
ignoreEos[source]
-
disableTokenIds[source]
-
stopStrings[source]
-
samplers[source]
-
useChatTemplate[source]
-
setNThreads(nThreads: int)[source]
Set the number of threads to use during generation
-
setNThreadsBatch(nThreadsBatch: int)[source]
Set the number of threads to use during batch and prompt processing
-
setNCtx(nCtx: int)[source]
Set the size of the prompt context
-
setNBatch(nBatch: int)[source]
Set the logical batch size for prompt processing (must be >=32 to use BLAS)
-
setNUbatch(nUbatch: int)[source]
Set the physical batch size for prompt processing (must be >=32 to use BLAS)
-
setNDraft(nDraft: int)[source]
Set the number of tokens to draft for speculative decoding
-
setNGpuLayers(nGpuLayers: int)[source]
Set the number of layers to store in VRAM (-1 - use default)
-
setNGpuLayersDraft(nGpuLayersDraft: int)[source]
Set the number of layers to store in VRAM for the draft model (-1 - use default)
-
setGpuSplitMode(gpuSplitMode: str)[source]
Set how to split the model across GPUs
-
setMainGpu(mainGpu: int)[source]
Set the main GPU that is used for scratch and small tensors.
-
setRopeFreqBase(ropeFreqBase: float)[source]
Set the RoPE base frequency, used by NTK-aware scaling
-
setRopeFreqScale(ropeFreqScale: float)[source]
Set the RoPE frequency scaling factor, expands context by a factor of 1/N
-
setYarnExtFactor(yarnExtFactor: float)[source]
Set the YaRN extrapolation mix factor
-
setYarnAttnFactor(yarnAttnFactor: float)[source]
Set the YaRN scale sqrt(t) or attention magnitude
-
setYarnBetaFast(yarnBetaFast: float)[source]
Set the YaRN low correction dim or beta
-
setYarnBetaSlow(yarnBetaSlow: float)[source]
Set the YaRN high correction dim or alpha
-
setYarnOrigCtx(yarnOrigCtx: int)[source]
Set the YaRN original context size of model
-
setDefragmentationThreshold(defragmentationThreshold: float)[source]
Set the KV cache defragmentation threshold
-
setNumaStrategy(numaStrategy: str)[source]
Set optimization strategies that help on some NUMA systems (if available)
Possible values:
DISABLED: No NUMA optimizations
DISTRIBUTE: spread execution evenly over all nodes
ISOLATE: only spawn threads on CPUs on the node that execution started on
NUMA_CTL: use the CPU map provided by numactl
MIRROR: Mirrors the model across NUMA nodes
-
setRopeScalingType(ropeScalingType: str)[source]
Set the RoPE frequency scaling method, defaults to linear unless specified by the model.
Possible values:
NONE: Don’t use any scaling
LINEAR: Linear scaling
YARN: YaRN RoPE scaling
-
setPoolingType(poolingType: str)[source]
Set the pooling type for embeddings, use model default if unspecified
Possible values:
0 NONE: Don’t use any pooling
1 MEAN: Mean Pooling
2 CLS: CLS Pooling
3 LAST: Last token pooling
4 RANK: For reranked models
-
setModelDraft(modelDraft: str)[source]
Set the draft model for speculative decoding
-
setModelAlias(modelAlias: str)[source]
Set a model alias
-
setEmbedding(embedding: bool)[source]
Whether to load model with embedding support
-
setFlashAttention(flashAttention: bool)[source]
Whether to enable Flash Attention
-
setUseMmap(useMmap: bool)[source]
Whether to memory-map the model (faster load but may increase pageouts if not using mlock)
-
setUseMlock(useMlock: bool)[source]
Whether to force the system to keep the model in RAM rather than swapping or compressing
-
setNoKvOffload(noKvOffload: bool)[source]
Whether to disable KV offload
-
setSystemPrompt(systemPrompt: str)[source]
Set a system prompt to use
-
setChatTemplate(chatTemplate: str)[source]
The chat template to use
-
setInputPrefix(inputPrefix: str)[source]
Set the prompt to start generation with
-
setInputSuffix(inputSuffix: str)[source]
Set a suffix for infilling
-
setCachePrompt(cachePrompt: bool)[source]
Whether to remember the prompt to avoid reprocessing it
-
setNPredict(nPredict: int)[source]
Set the number of tokens to predict
-
setTopK(topK: int)[source]
Set top-k sampling
-
setTopP(topP: float)[source]
Set top-p sampling
-
setMinP(minP: float)[source]
Set min-p sampling
-
setTfsZ(tfsZ: float)[source]
Set tail free sampling, parameter z
-
setTypicalP(typicalP: float)[source]
Set locally typical sampling, parameter p
-
setTemperature(temperature: float)[source]
Set the temperature
-
setDynamicTemperatureRange(dynamicTemperatureRange: float)[source]
Set the dynamic temperature range
-
setDynamicTemperatureExponent(dynamicTemperatureExponent: float)[source]
Set the dynamic temperature exponent
-
setRepeatLastN(repeatLastN: int)[source]
Set the last n tokens to consider for penalties
-
setRepeatPenalty(repeatPenalty: float)[source]
Set the penalty of repeated sequences of tokens
-
setFrequencyPenalty(frequencyPenalty: float)[source]
Set the repetition alpha frequency penalty
-
setPresencePenalty(presencePenalty: float)[source]
Set the repetition alpha presence penalty
-
setMiroStat(miroStat: str)[source]
Set MiroStat sampling strategies.
-
setMiroStatTau(miroStatTau: float)[source]
Set the MiroStat target entropy, parameter tau
-
setMiroStatEta(miroStatEta: float)[source]
Set the MiroStat learning rate, parameter eta
-
setPenalizeNl(penalizeNl: bool)[source]
Whether to penalize newline tokens
-
setNKeep(nKeep: int)[source]
Set the number of tokens to keep from the initial prompt
-
setSeed(seed: int)[source]
Set the RNG seed
-
setNProbs(nProbs: int)[source]
Set the number of top token probabilities to output, if greater than 0.
-
setMinKeep(minKeep: int)[source]
Set the minimum number of tokens the samplers should return (0 = disabled)
-
setGrammar(grammar: str)[source]
Set BNF-like grammar to constrain generations
-
setPenaltyPrompt(penaltyPrompt: str)[source]
Override which part of the prompt is penalized for repetition.
-
setIgnoreEos(ignoreEos: bool)[source]
Set whether to ignore the end-of-stream token and continue generating (implies --logit-bias 2-inf)
-
setDisableTokenIds(disableTokenIds: List[int])[source]
Set the token ids to disable in the completion
-
setStopStrings(stopStrings: List[str])[source]
Set strings which, when encountered, stop token generation
-
setSamplers(samplers: List[str])[source]
Set which samplers to use for token generation in the given order
-
setUseChatTemplate(useChatTemplate: bool)[source]
Set whether generate should apply a chat template
-
setNParallel(nParallel: int)[source]
Sets the number of parallel processes for decoding. This is an alias for setBatchSize.
-
setTokenIdBias(tokenIdBias: Dict[int, float])[source]
Set token id bias
-
setTokenBias(tokenBias: Dict[str, float])[source]
Set token bias
-
getMetadata()[source]
Gets the metadata of the model
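-
Example (a sketch; AutoGGUFModel is assumed here to mix in this class, and the parameter values are illustrative):
>>> from sparknlp.annotator import AutoGGUFModel
>>> auto_gguf = AutoGGUFModel.pretrained() \
...     .setInputCols(["document"]) \
...     .setOutputCol("completions") \
...     .setNGpuLayers(99) \
...     .setNPredict(256) \
...     .setTemperature(0.4) \
...     .setTopK(40) \
...     .setTopP(0.9) \
...     .setStopStrings(["</s>"])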