sparknlp.annotator.seq2seq.cpm_transformer#
Contains classes for the CPMTransformer.
Module Contents#
Classes#
CPMTransformer | MiniCPM: Unveiling the Potential of End-side Large Language Models
- class CPMTransformer(classname='com.johnsnowlabs.nlp.annotators.seq2seq.CPMTransformer', java_model=None)[source]#
MiniCPM: Unveiling the Potential of End-side Large Language Models
MiniCPM is a series of edge-side large language models, with the base model, MiniCPM-2B, having 2.4B non-embedding parameters. It ranks closely with Mistral-7B on comprehensive benchmarks (with better performance in Chinese, mathematics, and coding abilities), surpassing models like Llama2-13B, MPT-30B, and Falcon-40B. On the MTBench benchmark, which is closest to user experience, MiniCPM-2B also outperforms many representative open-source models such as Llama2-70B-Chat, Vicuna-33B, Mistral-7B-Instruct-v0.1, and Zephyr-7B-alpha.
After DPO, MiniCPM outperforms Llama2-70B-Chat, Vicuna-33B, Mistral-7B-Instruct-v0.1, Zephyr-7B-alpha, etc. on MTBench.
MiniCPM-V, based on MiniCPM-2B, achieves the best overall performance among multimodal models of the same scale, surpassing existing multimodal large models built on Phi-2 and achieving performance comparable to or even better than the 9.6B Qwen-VL-Chat on some tasks.
MiniCPM can be deployed and run inference on smartphones, with streaming output faster than human speaking speed.
Pretrained models can be loaded with pretrained() of the companion object:

>>> cpm = CPMTransformer.pretrained() \
...     .setInputCols(["document"]) \
...     .setOutputCol("generation")

The default model is "llama_2_7b_chat_hf_int4", if no name is provided. For available pretrained models please see the Models Hub.

Input Annotation types: DOCUMENT
Output Annotation type: DOCUMENT
- Parameters:
- configProtoBytes
ConfigProto from tensorflow, serialized into byte array.
- minOutputLength
Minimum length of the sequence to be generated, by default 0
- maxOutputLength
Maximum length of output text, by default 20
- doSample
Whether or not to use sampling; use greedy decoding otherwise, by default False
- temperature
The value used to modulate the next token probabilities, by default 1.0
- topK
The number of highest probability vocabulary tokens to keep for top-k-filtering, by default 50
- topP
Top cumulative probability for vocabulary tokens, by default 1.0
If set to float < 1, only the most probable tokens with probabilities that add up to topP or higher are kept for generation.
- repetitionPenalty
The parameter for repetition penalty; 1.0 means no penalty, by default 1.0
- noRepeatNgramSize
If set to int > 0, all ngrams of that size can only occur once, by default 0
- ignoreTokenIds
A list of token ids which are ignored in the decoder’s output, by default []
Notes
This is a very computationally expensive module, especially on longer sequences. The use of an accelerator such as a GPU is recommended.
References
MiniCPM: Unveiling the Potential of End-side Large Language Models <https://shengdinghu.notion.site/MiniCPM-Unveiling-the-Potential-of-End-side-Large-Language-Models-d4d3a8c426424654a4e80e42a711cb20>
Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("documents")
>>> cpm = CPMTransformer.pretrained("llama_2_7b_chat_hf_int4") \
...     .setInputCols(["documents"]) \
...     .setMaxOutputLength(50) \
...     .setOutputCol("generation")
>>> pipeline = Pipeline().setStages([documentAssembler, cpm])
>>> data = spark.createDataFrame([["My name is Leonardo."]]).toDF("text")
>>> result = pipeline.fit(data).transform(data)
>>> result.select("generation.result").show(truncate=False)
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|result                                                                                                                                                                                                  |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|[My name is Leonardo. I am a student at the University of California, Los Angeles. I have a passion for writing and learning about different cultures. I enjoy playing basketball and watching movies] |
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
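The decoding parameters described above can also be combined to switch from the default greedy decoding to sampling. A minimal sketch, reusing the documentAssembler from the example above; the specific values (temperature, top-k, top-p, penalties, lengths) are illustrative assumptions, not recommended defaults:

>>> # enable sampling-based decoding instead of greedy search
>>> cpm_sampling = CPMTransformer.pretrained() \
...     .setInputCols(["documents"]) \
...     .setOutputCol("generation") \
...     .setDoSample(True) \
...     .setTemperature(0.7) \
...     .setTopK(50) \
...     .setTopP(0.9) \
...     .setRepetitionPenalty(1.1) \
...     .setNoRepeatNgramSize(3) \
...     .setMinOutputLength(10) \
...     .setMaxOutputLength(50)
>>> samplingPipeline = Pipeline().setStages([documentAssembler, cpm_sampling])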
- setIgnoreTokenIds(value)[source]#
A list of token ids which are ignored in the decoder’s output.
- Parameters:
- value : List[int]
The words to be filtered out
- setConfigProtoBytes(b)[source]#
Sets configProto from tensorflow, serialized into byte array.
- Parameters:
- b : List[int]
ConfigProto from tensorflow, serialized into byte array
- setMinOutputLength(value)[source]#
Sets minimum length of the sequence to be generated.
- Parameters:
- value : int
Minimum length of the sequence to be generated
- setMaxOutputLength(value)[source]#
Sets maximum length of output text.
- Parameters:
- value : int
Maximum length of output text
- setDoSample(value)[source]#
Sets whether or not to use sampling, use greedy decoding otherwise.
- Parameters:
- value : bool
Whether or not to use sampling; use greedy decoding otherwise
- setTemperature(value)[source]#
Sets the value used to modulate the next token probabilities.
- Parameters:
- value : float
The value used to modulate the next token probabilities
- setTopK(value)[source]#
Sets the number of highest probability vocabulary tokens to keep for top-k-filtering.
- Parameters:
- value : int
Number of highest probability vocabulary tokens to keep
- setTopP(value)[source]#
Sets the top cumulative probability for vocabulary tokens.
If set to float < 1, only the most probable tokens with probabilities that add up to topP or higher are kept for generation.
- Parameters:
- value : float
Cumulative probability for vocabulary tokens
- setRepetitionPenalty(value)[source]#
Sets the parameter for repetition penalty. 1.0 means no penalty.
- Parameters:
- value : float
The repetition penalty
References
See Ctrl: A Conditional Transformer Language Model For Controllable Generation for more details.
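For background, the penalized sampling in the CTRL paper discounts the scores of tokens that were already generated. A sketch of that formula in the paper's notation (g is the set of previously generated tokens, T the temperature, and theta the penalty; this summarizes the paper, not necessarily this annotator's exact implementation):

p_i = \frac{\exp\left(x_i / (T \cdot I(i \in g))\right)}{\sum_j \exp\left(x_j / (T \cdot I(j \in g))\right)}, \qquad I(c) = \theta \text{ if } c \text{ is True, else } 1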
- setNoRepeatNgramSize(value)[source]#
Sets size of n-grams that can only occur once.
If set to int > 0, all ngrams of that size can only occur once.
- Parameters:
- value : int
The n-gram size; n-grams of that size can only occur once
- static loadSavedModel(folder, spark_session, use_openvino=False)[source]#
Loads a locally saved model.
- Parameters:
- folder : str
Folder of the saved model
- spark_session : pyspark.sql.SparkSession
The current SparkSession
- use_openvino : bool, optional
Whether to use the OpenVINO engine to load the model, by default False
- Returns:
- CPMTransformer
The restored model
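A minimal sketch of restoring a locally exported model and saving it as a Spark NLP model; the folder paths below are hypothetical and should point to a model exported in a format Spark NLP can import:

>>> import sparknlp
>>> spark = sparknlp.start()
>>> # load the exported model from disk, then configure it like any other annotator
>>> cpm = CPMTransformer.loadSavedModel("/path/to/exported_cpm", spark) \
...     .setInputCols(["documents"]) \
...     .setOutputCol("generation")
>>> cpm.write().overwrite().save("/path/to/cpm_spark_nlp_model")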
- static pretrained(name='llama_2_7b_chat_hf_int4', lang='en', remote_loc=None)[source]#
Downloads and loads a pretrained model.
- Parameters:
- name : str, optional
Name of the pretrained model, by default “llama_2_7b_chat_hf_int4”
- lang : str, optional
Language of the pretrained model, by default “en”
- remote_loc : str, optional
Optional remote address of the resource, by default None. Will use Spark NLP's repositories otherwise.
- Returns:
- CPMTransformer
The restored model