sparknlp.annotator.seq2seq.bart_transformer#
Contains classes for the BartTransformer.
Module Contents#
Classes#
- BartTransformer: BART - Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
- class BartTransformer(classname='com.johnsnowlabs.nlp.annotators.seq2seq.BartTransformer', java_model=None)[source]#
- BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension Transformer
- The Facebook BART (Bidirectional and Auto-Regressive Transformer) model is a state-of-the-art language generation model that was introduced by Facebook AI in 2019. It is based on the transformer architecture and is designed to handle a wide range of natural language processing tasks such as text generation, summarization, and machine translation.
- BART is unique in that it is both bidirectional and auto-regressive, meaning that it can generate text both from left-to-right and from right-to-left. This allows it to capture contextual information from both past and future tokens in a sentence, resulting in more accurate and natural language generation.
- The model was trained on a large corpus of text data using a combination of unsupervised and supervised learning techniques. It incorporates pretraining and fine-tuning phases, where the model is first trained on a large unlabeled corpus of text, and then fine-tuned on specific downstream tasks.
- BART has achieved state-of-the-art performance on a wide range of NLP tasks, including summarization, question-answering, and language translation. Its ability to handle multiple tasks and its high performance on each of these tasks make it a versatile and valuable tool for natural language processing applications.
- Pretrained models can be loaded with pretrained() of the companion object:
>>> bart = BartTransformer.pretrained() \
...     .setTask("summarize:") \
...     .setInputCols(["document"]) \
...     .setOutputCol("summaries")
- The default model is "distilbart_xsum_12_6", if no name is provided. For available pretrained models please see the Models Hub.
- For extended examples of usage, see the BartTestSpec.
- Input Annotation types: DOCUMENT
- Output Annotation type: DOCUMENT
- Parameters:
- batchSize
- Batch Size, by default 1. 
- configProtoBytes
- ConfigProto from tensorflow, serialized into byte array. 
- task
- Transformer’s task, e.g. summarize:, by default “”.
- minOutputLength
- Minimum length of the sequence to be generated, by default 0. 
- maxOutputLength
- Maximum length of output text, by default 20. 
- doSample
- Whether or not to use sampling; use greedy decoding otherwise, by default False. 
- temperature
- The value used to modulate the next token probabilities, by default 1.0. 
- topK
- The number of highest probability vocabulary tokens to keep for top-k-filtering, by default 50. 
- beamSize
- The number of beams for beam search, by default 1. 
- topP
- Top cumulative probability for vocabulary tokens, by default 1.0. If set to float < 1, only the most probable tokens with probabilities that add up to topP or higher are kept for generation.
- repetitionPenalty
- The parameter for repetition penalty. 1.0 means no penalty, by default 1.0. 
- noRepeatNgramSize
- If set to int > 0, all ngrams of that size can only occur once, by default 0. 
- ignoreTokenIds
- A list of token ids which are ignored in the decoder’s output, by default []. 
- useCache
- Whether or not to use cache, by default False. 
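- For example, a minimal sketch of how the sampling-related parameters above are typically combined (the annotator name bart and the specific values are illustrative, not recommended defaults):
>>> bart = BartTransformer.pretrained() \
...     .setInputCols(["document"]) \
...     .setOutputCol("summaries") \
...     .setDoSample(True) \
...     .setTemperature(0.7) \
...     .setTopK(50) \
...     .setTopP(0.95)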
- Notes
- This is a very computationally expensive module, especially on longer sequences. The use of an accelerator such as GPU is recommended.
 
- References
- Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension (Lewis et al., 2019), https://arxiv.org/abs/1910.13461
- Paper Abstract: We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and many other more recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also report ablation experiments that replicate other pretraining schemes within the BART framework, to better measure which factors most influence end-task performance.
- Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("documents")
>>> bart = BartTransformer.pretrained("distilbart_xsum_12_6") \
...     .setTask("summarize:") \
...     .setInputCols(["documents"]) \
...     .setMaxOutputLength(200) \
...     .setOutputCol("summaries")
>>> pipeline = Pipeline().setStages([documentAssembler, bart])
>>> data = spark.createDataFrame([[
...     "Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a " +
...     "downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness" +
...     " of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this " +
...     "paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework " +
...     "that converts all text-based language problems into a text-to-text format. Our systematic study compares " +
...     "pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens " +
...     "of language understanding tasks. By combining the insights from our exploration with scale and our new " +
...     "Colossal Clean Crawled Corpus, we achieve state-of-the-art results on many benchmarks covering " +
...     "summarization, question answering, text classification, and more. To facilitate future work on transfer " +
...     "learning for NLP, we release our data set, pre-trained models, and code."
... ]]).toDF("text")
>>> result = pipeline.fit(data).transform(data)
>>> result.select("summaries.result").show(truncate=False)
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|result                                                                                                                                                                                                          |
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|[transfer learning has emerged as a powerful technique in natural language processing (NLP) the effectiveness of transfer learning has given rise to a diversity of approaches, methodologies, and practice .]  |
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
- setIgnoreTokenIds(value)[source]#
- A list of token ids which are ignored in the decoder’s output, by default []. - Parameters:
- value: List[int]
- The words to be filtered out 
 
 
 - setConfigProtoBytes(b)[source]#
- Sets configProto from tensorflow, serialized into byte array. - Parameters:
- b: List[int]
- ConfigProto from tensorflow, serialized into byte array 
 
 
 - setTask(value)[source]#
- Sets the transformer’s task, e.g. summarize:, by default “”. - Parameters:
- value: str
- The transformer’s task 
 
 
 - setMinOutputLength(value)[source]#
- Sets minimum length of the sequence to be generated, by default 0. - Parameters:
- value: int
- Minimum length of the sequence to be generated 
 
 
 - setMaxOutputLength(value)[source]#
- Sets maximum length of output text, by default 20. - Parameters:
- value: int
- Maximum length of output text 
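- A short sketch of constraining the generated length with setMinOutputLength and setMaxOutputLength (the bounds shown are illustrative only):
>>> bart = BartTransformer.pretrained() \
...     .setInputCols(["document"]) \
...     .setOutputCol("summaries") \
...     .setMinOutputLength(30) \
...     .setMaxOutputLength(120)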
 
 
 - setDoSample(value)[source]#
- Sets whether or not to use sampling, use greedy decoding otherwise, by default False. - Parameters:
- value: bool
- Whether or not to use sampling; use greedy decoding otherwise 
 
 
 - setTemperature(value)[source]#
- Sets the value used to modulate the next token probabilities, by default 1.0. - Parameters:
- value: float
- The value used to modulate the next token probabilities 
 
 
 - setTopK(value)[source]#
- Sets the number of highest probability vocabulary tokens to keep for top-k-filtering, by default 50. - Parameters:
- value: int
- Number of highest probability vocabulary tokens to keep 
 
 
 - setTopP(value)[source]#
- Sets the top cumulative probability for vocabulary tokens, by default 1.0. If set to float < 1, only the most probable tokens with probabilities that add up to topP or higher are kept for generation. - Parameters:
- value: float
- Cumulative probability for vocabulary tokens 
 
 
 - setRepetitionPenalty(value)[source]#
- Sets the parameter for repetition penalty. 1.0 means no penalty, by default 1.0. - Parameters:
- value: float
- The repetition penalty 
 
 - References - See Ctrl: A Conditional Transformer Language Model For Controllable Generation for more details. 
 - setNoRepeatNgramSize(value)[source]#
- Sets the size of n-grams that can only occur once, by default 0. If set to int > 0, all n-grams of that size can only occur once. - Parameters:
- value: int
- The n-gram size that can only occur once 
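- A small sketch of combining the two repetition controls documented here, setRepetitionPenalty and setNoRepeatNgramSize (the values are illustrative, not tuned defaults):
>>> bart = BartTransformer.pretrained() \
...     .setInputCols(["document"]) \
...     .setOutputCol("summaries") \
...     .setRepetitionPenalty(1.2) \
...     .setNoRepeatNgramSize(3)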
 
 
 - setBeamSize(value)[source]#
- Sets the number of beams for beam search, by default 4. - Parameters:
- value: int
- The number of beams for beam search 
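- For instance, a sketch of switching from sampling to beam search (the beam width of 4 is illustrative):
>>> bart = BartTransformer.pretrained() \
...     .setInputCols(["document"]) \
...     .setOutputCol("summaries") \
...     .setDoSample(False) \
...     .setBeamSize(4)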
 
 
 - setCache(value)[source]#
- Sets whether or not to use caching to enhance performance, by default False. - Parameters:
- value: bool
- Whether or not to use caching to enhance performance 
 
 
 - static loadSavedModel(folder, spark_session, use_cache=False)[source]#
- Loads a locally saved model. - Parameters:
- folder: str
- Folder of the saved model 
- spark_session: pyspark.sql.SparkSession
- The current SparkSession 
- use_cache: bool
- Whether the model uses caching to improve performance 
 
- Returns:
- BartTransformer
- The restored model 
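- For example, a sketch of restoring a locally exported model (the folder path is a placeholder, and spark is assumed to be an active SparkSession):
>>> bart = BartTransformer.loadSavedModel("/path/to/exported_bart", spark) \
...     .setInputCols(["document"]) \
...     .setOutputCol("generated")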
 
 
 - static pretrained(name='distilbart_xsum_12_6', lang='en', remote_loc=None)[source]#
- Downloads and loads a pretrained model. - Parameters:
- name: str, optional
- Name of the pretrained model, by default “distilbart_xsum_12_6” 
- lang: str, optional
- Language of the pretrained model, by default “en” 
- remote_loc: str, optional
- Optional remote address of the resource, by default None. Will use Spark NLP's repositories otherwise. 
 
- Returns:
- BartTransformer
- The restored model
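- For example, requesting the default model explicitly by name and language:
>>> bart = BartTransformer.pretrained("distilbart_xsum_12_6", lang="en") \
...     .setTask("summarize:") \
...     .setInputCols(["document"]) \
...     .setOutputCol("summaries")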