sparknlp.annotator.seq2seq.llama2_transformer

Contains classes for the LLAMA2Transformer.

Module Contents

Classes

LLAMA2Transformer

Llama 2: Open Foundation and Fine-Tuned Chat Models

class LLAMA2Transformer(classname='com.johnsnowlabs.nlp.annotators.seq2seq.LLAMA2Transformer', java_model=None)

Llama 2: Open Foundation and Fine-Tuned Chat Models

The Llama 2 release introduces a family of pretrained and fine-tuned LLMs, ranging in scale from 7B to 70B parameters (7B, 13B, 70B). The pretrained models come with significant improvements over the Llama 1 models, including being trained on 40% more tokens, having a much longer context length (4k tokens 🤯), and using grouped-query attention for fast inference of the 70B model 🔥!

However, the most exciting part of this release is the fine-tuned models (Llama 2-Chat), which have been optimized for dialogue applications using Reinforcement Learning from Human Feedback (RLHF). Across a wide range of helpfulness and safety benchmarks, the Llama 2-Chat models perform better than most open models and achieve comparable performance to ChatGPT according to human evaluations.

Pretrained models can be loaded with pretrained() of the companion object:

>>> llama2 = LLAMA2Transformer.pretrained() \
...     .setInputCols(["document"]) \
...     .setOutputCol("generation")

The default model is "llama_2_7b_chat_hf_int4", if no name is provided. For available pretrained models please see the Models Hub.

Input Annotation types: DOCUMENT

Output Annotation type: DOCUMENT

Parameters:
configProtoBytes

ConfigProto from tensorflow, serialized into byte array.

minOutputLength

Minimum length of the sequence to be generated, by default 0

maxOutputLength

Maximum length of output text, by default 20

doSample

Whether or not to use sampling; use greedy decoding otherwise, by default False

temperature

The value used to modulate the next token probabilities, by default 1.0

topK

The number of highest probability vocabulary tokens to keep for top-k-filtering, by default 50

topP

Top cumulative probability for vocabulary tokens, by default 1.0

If set to float < 1, only the most probable tokens with probabilities that add up to topP or higher are kept for generation.

repetitionPenalty

The parameter for repetition penalty; 1.0 means no penalty, by default 1.0

noRepeatNgramSize

If set to int > 0, all n-grams of that size can only occur once, by default 0

ignoreTokenIds

A list of token ids which are ignored in the decoder’s output, by default []
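Each of these parameters has a corresponding setter on the annotator. As a hedged sketch (the values below are purely illustrative, not tuned recommendations), a fully configured instance might look like:

>>> llama2 = LLAMA2Transformer.pretrained() \
...     .setInputCols(["document"]) \
...     .setOutputCol("generation") \
...     .setMinOutputLength(10) \
...     .setMaxOutputLength(100) \
...     .setDoSample(True) \
...     .setTemperature(0.7) \
...     .setTopK(50) \
...     .setTopP(0.9) \
...     .setRepetitionPenalty(1.1) \
...     .setNoRepeatNgramSize(3)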

Notes

This is a very computationally expensive module, especially on longer sequences. The use of an accelerator such as a GPU is recommended.
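For example, when a GPU is available, the Spark session can be started with GPU support via the standard entry point (a minimal sketch):

>>> import sparknlp
>>> spark = sparknlp.start(gpu=True)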

References

Paper Abstract:

In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.

Examples

>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("documents")
>>> llama2 = LLAMA2Transformer.pretrained("llama_2_7b_chat_hf_int4") \
...     .setInputCols(["documents"]) \
...     .setMaxOutputLength(50) \
...     .setOutputCol("generation")
>>> pipeline = Pipeline().setStages([documentAssembler, llama2])
>>> data = spark.createDataFrame([["My name is Leonardo."]]).toDF("text")
>>> result = pipeline.fit(data).transform(data)
>>> result.select("summaries.generation").show(truncate=False)
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|result                                                                                                                                                                                              |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|[My name is Leonardo. I am a man of letters. I have been a man for many years. I was born in the year 1776. I came to the United States in 1776, and I have lived in the United Kingdom since 1776.]|
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
setIgnoreTokenIds(value)

A list of token ids which are ignored in the decoder’s output.

Parameters:
value : List[int]

The words to be filtered out
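For example, to suppress specific tokens during generation (the ids below are hypothetical placeholders; actual ids depend on the model's vocabulary):

>>> llama2.setIgnoreTokenIds([0, 2])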

setConfigProtoBytes(b)

Sets configProto from tensorflow, serialized into byte array.

Parameters:
b : List[int]

ConfigProto from tensorflow, serialized into byte array

setMinOutputLength(value)

Sets minimum length of the sequence to be generated.

Parameters:
value : int

Minimum length of the sequence to be generated

setMaxOutputLength(value)

Sets maximum length of output text.

Parameters:
value : int

Maximum length of output text

setDoSample(value)

Sets whether or not to use sampling, use greedy decoding otherwise.

Parameters:
value : bool

Whether or not to use sampling; use greedy decoding otherwise

setTemperature(value)

Sets the value used to modulate the next token probabilities.

Parameters:
value : float

The value used to modulate the next token probabilities

setTopK(value)

Sets the number of highest probability vocabulary tokens to keep for top-k-filtering.

Parameters:
value : int

Number of highest probability vocabulary tokens to keep

setTopP(value)

Sets the top cumulative probability for vocabulary tokens.

If set to float < 1, only the most probable tokens with probabilities that add up to topP or higher are kept for generation.

Parameters:
value : float

Cumulative probability for vocabulary tokens
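Note that topP typically only takes effect when sampling is enabled via setDoSample(True). A minimal sketch combining the two (0.9 is an illustrative value):

>>> llama2.setDoSample(True).setTopP(0.9)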

setRepetitionPenalty(value)

Sets the parameter for repetition penalty. 1.0 means no penalty.

Parameters:
value : float

The repetition penalty

References

See Ctrl: A Conditional Transformer Language Model For Controllable Generation for more details.

setNoRepeatNgramSize(value)

Sets size of n-grams that can only occur once.

If set to int > 0, all n-grams of that size can only occur once.

Parameters:
value : int

N-grams of this size can only occur once

static loadSavedModel(folder, spark_session, use_openvino=False)

Loads a locally saved model.

Parameters:
folder : str

Folder of the saved model

spark_session : pyspark.sql.SparkSession

The current SparkSession

use_openvino : bool, optional

Whether to load the model with the OpenVINO engine, by default False

Returns:
LLAMA2Transformer

The restored model
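A minimal sketch, assuming a compatible model has already been exported to a local folder (the path below is a hypothetical placeholder):

>>> llama2 = LLAMA2Transformer.loadSavedModel("/tmp/llama2_spark_nlp", spark) \
...     .setInputCols(["document"]) \
...     .setOutputCol("generation")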

static pretrained(name='llama_2_7b_chat_hf_int4', lang='en', remote_loc=None)

Downloads and loads a pretrained model.

Parameters:
name : str, optional

Name of the pretrained model, by default “llama_2_7b_chat_hf_int4”

lang : str, optional

Language of the pretrained model, by default “en”

remote_loc : str, optional

Optional remote address of the resource, by default None. Will use Spark NLP's repositories otherwise.

Returns:
LLAMA2Transformer

The restored model
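For example, to request the default model explicitly by name and language:

>>> llama2 = LLAMA2Transformer.pretrained("llama_2_7b_chat_hf_int4", lang="en") \
...     .setInputCols(["document"]) \
...     .setOutputCol("generation")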