class T5Transformer extends AnnotatorModel[T5Transformer] with HasBatchedAnnotate[T5Transformer] with ParamsAndFeaturesWritable with WriteTensorflowModel with WriteOnnxModel with WriteOpenvinoModel with HasCaseSensitiveProperties with WriteSentencePieceModel with HasProtectedParams with HasEngine
T5: the Text-To-Text Transfer Transformer
T5 reframes all NLP tasks into a unified text-to-text format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. The text-to-text framework allows the same model, loss function, and hyperparameters to be used on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). T5 can even be applied to regression tasks by training it to predict the string representation of a number instead of the number itself.
Pretrained models can be loaded with the pretrained method of the companion object:
val t5 = T5Transformer.pretrained()
  .setTask("summarize:")
  .setInputCols("document")
  .setOutputCol("summaries")
The default model is "t5_small", if no name is provided. For available pretrained models please see the Models Hub.
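A specific model can also be selected by name and language. The snippet below is only an illustrative sketch: "t5_base" and the (name, lang) overload of pretrained are assumptions based on the usual Spark NLP loading pattern, so check the Models Hub for the actual model names.

// Illustrative sketch (model name and overload are assumptions):
// load a named pretrained model instead of the default and give it a
// translation task prefix.
val t5Base = T5Transformer
  .pretrained("t5_base", lang = "en")
  .setTask("translate English to German:")
  .setInputCols("document")
  .setOutputCol("translation")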
For extended examples of usage, see the Examples and the T5TestSpec.
References:
- Exploring Transfer Learning with T5: the Text-To-Text Transfer Transformer
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
- https://github.com/google-research/text-to-text-transfer-transformer
Paper Abstract:
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new Colossal Clean Crawled Corpus, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
Note:
This is a very computationally expensive module, especially on longer sequences. The use of an accelerator such as a GPU is recommended.
Example
import spark.implicits._
import com.johnsnowlabs.nlp.base.DocumentAssembler
import com.johnsnowlabs.nlp.annotators.seq2seq.T5Transformer
import org.apache.spark.ml.Pipeline

val documentAssembler = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("documents")

val t5 = T5Transformer.pretrained("t5_small")
  .setTask("summarize:")
  .setInputCols(Array("documents"))
  .setMaxOutputLength(200)
  .setOutputCol("summaries")

val pipeline = new Pipeline().setStages(Array(documentAssembler, t5))

val data = Seq(
  "Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a " +
    "downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness" +
    " of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this " +
    "paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework " +
    "that converts all text-based language problems into a text-to-text format. Our systematic study compares " +
    "pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens " +
    "of language understanding tasks. By combining the insights from our exploration with scale and our new " +
    "Colossal Clean Crawled Corpus, we achieve state-of-the-art results on many benchmarks covering " +
    "summarization, question answering, text classification, and more. To facilitate future work on transfer " +
    "learning for NLP, we release our data set, pre-trained models, and code."
).toDF("text")

val result = pipeline.fit(data).transform(data)
result.select("summaries.result").show(false)

+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|result                                                                                                                                                                                                          |
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|[transfer learning has emerged as a powerful technique in natural language processing (NLP) the effectiveness of transfer learning has given rise to a diversity of approaches, methodologies, and practice .] |
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
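Building on the example above, the text-generation parameters documented further down (doSample, temperature, topK, topP, minOutputLength, maxOutputLength, noRepeatNgramSize) can be tuned on the same annotator. The values below are arbitrary and only meant as a sketch:

// Illustrative sketch: tuning generation parameters (values are arbitrary).
val t5Sampling = T5Transformer.pretrained("t5_small")
  .setTask("summarize:")
  .setInputCols(Array("documents"))
  .setOutputCol("summaries")
  .setDoSample(true)        // sample instead of greedy decoding
  .setTemperature(0.7)      // soften the next-token distribution
  .setTopK(50)              // keep only the 50 most probable tokens
  .setTopP(0.9)             // nucleus (top-p) filtering threshold
  .setMinOutputLength(10)   // shortest allowed output sequence
  .setMaxOutputLength(100)  // longest allowed output sequence
  .setNoRepeatNgramSize(3)  // forbid repeating any 3-gram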
- By Inheritance
- T5Transformer
- HasEngine
- HasProtectedParams
- WriteSentencePieceModel
- HasCaseSensitiveProperties
- WriteOpenvinoModel
- WriteOnnxModel
- WriteTensorflowModel
- HasBatchedAnnotate
- AnnotatorModel
- CanBeLazy
- RawAnnotator
- HasOutputAnnotationCol
- HasInputAnnotationCols
- HasOutputAnnotatorType
- ParamsAndFeaturesWritable
- HasFeatures
- DefaultParamsWritable
- MLWritable
- Model
- Transformer
- PipelineStage
- Logging
- Params
- Serializable
- Serializable
- Identifiable
- AnyRef
- Any
Instance Constructors
Type Members
-
implicit
class
ProtectedParam[T] extends Param[T]
- Definition Classes
- HasProtectedParams
-
type
AnnotationContent = Seq[Row]
Internal types to show Rows as a relevant StructType. Should be deleted once Spark releases UserDefinedTypes to @DeveloperApi.
- Attributes
- protected
- Definition Classes
- AnnotatorModel
-
type
AnnotatorType = String
- Definition Classes
- HasOutputAnnotatorType
Value Members
-
final
def
!=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
final
def
##(): Int
- Definition Classes
- AnyRef → Any
-
final
def
$[T](param: Param[T]): T
- Attributes
- protected
- Definition Classes
- Params
-
def
$$[T](feature: StructFeature[T]): T
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
$$[K, V](feature: MapFeature[K, V]): Map[K, V]
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
$$[T](feature: SetFeature[T]): Set[T]
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
$$[T](feature: ArrayFeature[T]): Array[T]
- Attributes
- protected
- Definition Classes
- HasFeatures
-
final
def
==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
def
_transform(dataset: Dataset[_], recursivePipeline: Option[PipelineModel]): DataFrame
- Attributes
- protected
- Definition Classes
- AnnotatorModel
-
def
afterAnnotate(dataset: DataFrame): DataFrame
- Attributes
- protected
- Definition Classes
- AnnotatorModel
-
final
def
asInstanceOf[T0]: T0
- Definition Classes
- Any
-
def
batchAnnotate(batchedAnnotations: Seq[Array[Annotation]]): Seq[Seq[Annotation]]
takes a document and annotations and produces new annotations of this annotator's annotation type
- batchedAnnotations
Annotations in batches that correspond to inputAnnotationCols generated by previous annotators if any
- returns
any number of annotations processed for every batch of input annotations. There is not necessarily a one-to-one relationship. IMPORTANT: the returned sequences MUST be of equal length and MUST contain sentences that belong to the same original row (challenging)
- Definition Classes
- T5Transformer → HasBatchedAnnotate
-
def
batchProcess(rows: Iterator[_]): Iterator[Row]
- Definition Classes
- HasBatchedAnnotate
-
val
batchSize: IntParam
Size of every batch (Default depends on model).
- Definition Classes
- HasBatchedAnnotate
-
def
beforeAnnotate(dataset: Dataset[_]): Dataset[_]
- Attributes
- protected
- Definition Classes
- AnnotatorModel
-
val
caseSensitive: BooleanParam
Whether to ignore case in index lookups (Default depends on model)
- Definition Classes
- HasCaseSensitiveProperties
-
final
def
checkSchema(schema: StructType, inputAnnotatorType: String): Boolean
- Attributes
- protected
- Definition Classes
- HasInputAnnotationCols
-
final
def
clear(param: Param[_]): T5Transformer.this.type
- Definition Classes
- Params
-
def
clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( ... ) @native()
-
val
configProtoBytes: IntArrayParam
ConfigProto from tensorflow, serialized into byte array. Get with config_proto.SerializeToString()
-
def
copy(extra: ParamMap): T5Transformer
requirement for annotators copies
- Definition Classes
- RawAnnotator → Model → Transformer → PipelineStage → Params
-
def
copyValues[T <: Params](to: T, extra: ParamMap): T
- Attributes
- protected
- Definition Classes
- Params
-
final
def
defaultCopy[T <: Params](extra: ParamMap): T
- Attributes
- protected
- Definition Classes
- Params
-
val
doSample: BooleanParam
Whether or not to use sampling, use greedy decoding otherwise (Default: false)
-
val
engine: Param[String]
This param is set internally once via loadSavedModel. That's why there is no setter
- Definition Classes
- HasEngine
-
final
def
eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
-
def
equals(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
def
explainParam(param: Param[_]): String
- Definition Classes
- Params
-
def
explainParams(): String
- Definition Classes
- Params
-
def
extraValidate(structType: StructType): Boolean
- Attributes
- protected
- Definition Classes
- RawAnnotator
-
def
extraValidateMsg: String
Override for additional custom schema checks
- Attributes
- protected
- Definition Classes
- RawAnnotator
-
final
def
extractParamMap(): ParamMap
- Definition Classes
- Params
-
final
def
extractParamMap(extra: ParamMap): ParamMap
- Definition Classes
- Params
-
val
features: ArrayBuffer[Feature[_, _, _]]
- Definition Classes
- HasFeatures
-
def
finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( classOf[java.lang.Throwable] )
-
def
get[T](feature: StructFeature[T]): Option[T]
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
get[K, V](feature: MapFeature[K, V]): Option[Map[K, V]]
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
get[T](feature: SetFeature[T]): Option[Set[T]]
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
get[T](feature: ArrayFeature[T]): Option[Array[T]]
- Attributes
- protected
- Definition Classes
- HasFeatures
-
final
def
get[T](param: Param[T]): Option[T]
- Definition Classes
- Params
-
def
getBatchSize: Int
Size of every batch.
- Definition Classes
- HasBatchedAnnotate
-
def
getCaseSensitive: Boolean
- Definition Classes
- HasCaseSensitiveProperties
-
final
def
getClass(): Class[_]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- def getConfigProtoBytes: Option[Array[Byte]]
-
final
def
getDefault[T](param: Param[T]): Option[T]
- Definition Classes
- Params
- def getDoSample: Boolean
-
def
getEngine: String
- Definition Classes
- HasEngine
- def getIgnoreTokenIds: Array[Int]
-
def
getInputCols: Array[String]
- returns
input annotations columns currently used
- Definition Classes
- HasInputAnnotationCols
-
def
getLazyAnnotator: Boolean
- Definition Classes
- CanBeLazy
- def getMaxNewTokens: Int
- def getMaxOutputLength: Int
- def getMinOutputLength: Int
- def getModelIfNotSet: T5EncoderDecoder
- def getNoRepeatNgramSize: Int
-
final
def
getOrDefault[T](param: Param[T]): T
- Definition Classes
- Params
-
final
def
getOutputCol: String
Gets the annotation column name that will be generated
- Definition Classes
- HasOutputAnnotationCol
-
def
getParam(paramName: String): Param[Any]
- Definition Classes
- Params
- def getRandomSeed: Option[Long]
- def getRepetitionPenalty: Double
- def getSignatures: Option[Map[String, String]]
-
def
getStopAtEos: Boolean
Checks whether text generation stops when the end-of-sentence token is encountered.
- def getTemperature: Double
- def getTopK: Int
- def getTopP: Double
-
final
def
hasDefault[T](param: Param[T]): Boolean
- Definition Classes
- Params
-
def
hasParam(paramName: String): Boolean
- Definition Classes
- Params
-
def
hasParent: Boolean
- Definition Classes
- Model
-
def
hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
-
var
ignoreTokenIds: IntArrayParam
A list of token ids which are ignored in the decoder's output
-
def
initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
- Attributes
- protected
- Definition Classes
- Logging
-
def
initializeLogIfNecessary(isInterpreter: Boolean): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
val
inputAnnotatorTypes: Array[AnnotatorType]
Input annotator type : DOCUMENT
- Definition Classes
- T5Transformer → HasInputAnnotationCols
-
final
val
inputCols: StringArrayParam
Columns that contain annotations necessary to run this annotator. AnnotatorType is used both as input and output columns if not specified.
- Attributes
- protected
- Definition Classes
- HasInputAnnotationCols
-
final
def
isDefined(param: Param[_]): Boolean
- Definition Classes
- Params
-
final
def
isInstanceOf[T0]: Boolean
- Definition Classes
- Any
-
final
def
isSet(param: Param[_]): Boolean
- Definition Classes
- Params
-
def
isTraceEnabled(): Boolean
- Attributes
- protected
- Definition Classes
- Logging
-
val
lazyAnnotator: BooleanParam
- Definition Classes
- CanBeLazy
-
def
log: Logger
- Attributes
- protected
- Definition Classes
- Logging
-
def
logDebug(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logDebug(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logError(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logError(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logInfo(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logInfo(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logName: String
- Attributes
- protected
- Definition Classes
- Logging
-
def
logTrace(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logTrace(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logWarning(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logWarning(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
val
maxNewTokens: IntParam
Maximum number of new tokens to be generated (Default: 20)
-
val
maxOutputLength: IntParam
Maximum length of the sequence to be generated (Default: 20)
-
val
minOutputLength: IntParam
Minimum length of the sequence to be generated (Default: 0)
-
def
msgHelper(schema: StructType): String
- Attributes
- protected
- Definition Classes
- HasInputAnnotationCols
-
final
def
ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
-
val
noRepeatNgramSize: IntParam
If set to int > 0, all ngrams of that size can only occur once (Default: 0)
-
final
def
notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
-
final
def
notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
-
def
onWrite(path: String, spark: SparkSession): Unit
- Definition Classes
- T5Transformer → ParamsAndFeaturesWritable
-
val
optionalInputAnnotatorTypes: Array[String]
- Definition Classes
- HasInputAnnotationCols
-
val
outputAnnotatorType: String
Output annotator type : DOCUMENT
- Definition Classes
- T5Transformer → HasOutputAnnotatorType
-
final
val
outputCol: Param[String]
- Attributes
- protected
- Definition Classes
- HasOutputAnnotationCol
-
lazy val
params: Array[Param[_]]
- Definition Classes
- Params
-
var
parent: Estimator[T5Transformer]
- Definition Classes
- Model
-
var
randomSeed: Option[Long]
Optional Random seed for the model. Needs to be of type Long.
-
val
repetitionPenalty: DoubleParam
The parameter for repetition penalty (Default: 1.0). 1.0 means no penalty. See this paper for more details.
-
def
save(path: String): Unit
- Definition Classes
- MLWritable
- Annotations
- @Since( "1.6.0" ) @throws( ... )
-
def
set[T](param: ProtectedParam[T], value: T): T5Transformer.this.type
Sets the value for a protected Param.
If the parameter was already set, it will not be set again. Default values do not count as a set value and can be overridden.
- T
Type of the parameter
- param
Protected parameter to set
- value
Value for the parameter
- returns
This object
- Definition Classes
- HasProtectedParams
-
def
set[T](feature: StructFeature[T], value: T): T5Transformer.this.type
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
set[K, V](feature: MapFeature[K, V], value: Map[K, V]): T5Transformer.this.type
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
set[T](feature: SetFeature[T], value: Set[T]): T5Transformer.this.type
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
set[T](feature: ArrayFeature[T], value: Array[T]): T5Transformer.this.type
- Attributes
- protected
- Definition Classes
- HasFeatures
-
final
def
set(paramPair: ParamPair[_]): T5Transformer.this.type
- Attributes
- protected
- Definition Classes
- Params
-
final
def
set(param: String, value: Any): T5Transformer.this.type
- Attributes
- protected
- Definition Classes
- Params
-
final
def
set[T](param: Param[T], value: T): T5Transformer.this.type
- Definition Classes
- Params
-
def
setBatchSize(size: Int): T5Transformer.this.type
Size of every batch.
- Definition Classes
- HasBatchedAnnotate
-
def
setCaseSensitive(value: Boolean): T5Transformer.this.type
- Definition Classes
- HasCaseSensitiveProperties
- def setConfigProtoBytes(bytes: Array[Int]): T5Transformer.this.type
-
def
setDefault[T](feature: StructFeature[T], value: () ⇒ T): T5Transformer.this.type
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
setDefault[K, V](feature: MapFeature[K, V], value: () ⇒ Map[K, V]): T5Transformer.this.type
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
setDefault[T](feature: SetFeature[T], value: () ⇒ Set[T]): T5Transformer.this.type
- Attributes
- protected
- Definition Classes
- HasFeatures
-
def
setDefault[T](feature: ArrayFeature[T], value: () ⇒ Array[T]): T5Transformer.this.type
- Attributes
- protected
- Definition Classes
- HasFeatures
-
final
def
setDefault(paramPairs: ParamPair[_]*): T5Transformer.this.type
- Attributes
- protected
- Definition Classes
- Params
-
final
def
setDefault[T](param: Param[T], value: T): T5Transformer.this.type
- Attributes
- protected[org.apache.spark.ml]
- Definition Classes
- Params
- def setDoSample(value: Boolean): T5Transformer.this.type
- def setIgnoreTokenIds(tokenIds: Array[Int]): T5Transformer.this.type
-
final
def
setInputCols(value: String*): T5Transformer.this.type
- Definition Classes
- HasInputAnnotationCols
-
def
setInputCols(value: Array[String]): T5Transformer.this.type
Overrides required annotators column if different than default
- Definition Classes
- HasInputAnnotationCols
-
def
setLazyAnnotator(value: Boolean): T5Transformer.this.type
- Definition Classes
- CanBeLazy
- def setMaxNewTokens(value: Int): T5Transformer.this.type
- def setMaxOutputLength(value: Int): T5Transformer.this.type
- def setMinOutputLength(value: Int): T5Transformer.this.type
- def setModelIfNotSet(spark: SparkSession, openvinoWrapper: EncoderDecoderWrappers, spp: SentencePieceWrapper): T5Transformer.this.type
- def setModelIfNotSet(spark: SparkSession, encoder: OnnxWrapper, decoder: OnnxWrapper, spp: SentencePieceWrapper): T5Transformer.this.type
- def setModelIfNotSet(spark: SparkSession, tfWrapper: TensorflowWrapper, spp: SentencePieceWrapper, useCache: Boolean): T5Transformer.this.type
- def setNoRepeatNgramSize(value: Int): T5Transformer.this.type
-
final
def
setOutputCol(value: String): T5Transformer.this.type
Overrides annotation column name when transforming
- Definition Classes
- HasOutputAnnotationCol
-
def
setParent(parent: Estimator[T5Transformer]): T5Transformer
- Definition Classes
- Model
- def setRandomSeed(value: Long): T5Transformer.this.type
- def setRepetitionPenalty(value: Double): T5Transformer.this.type
- def setSignatures(value: Map[String, String]): T5Transformer.this.type
-
def
setStopAtEos(value: Boolean): T5Transformer.this.type
Determines whether text generation stops when the end-of-sentence token is encountered.
- def setTask(value: String): T5Transformer.this.type
- def setTemperature(value: Double): T5Transformer.this.type
- def setTopK(value: Int): T5Transformer.this.type
- def setTopP(value: Double): T5Transformer.this.type
-
val
signatures: MapFeature[String, String]
It contains TF model signatures for the loaded saved model
-
val
stopAtEos: BooleanParam
Stop text generation when the end-of-sentence token is encountered.
-
final
def
synchronized[T0](arg0: ⇒ T0): T0
- Definition Classes
- AnyRef
-
val
task: Param[String]
Set transformer task, e.g. "summarize:" (Default: ""). The T5 task needs to be in the format "task:".
-
val
temperature: DoubleParam
The value used to modulate the next token probabilities (Default: 1.0)
-
def
toString(): String
- Definition Classes
- Identifiable → AnyRef → Any
-
val
topK: IntParam
The number of highest probability vocabulary tokens to keep for top-k filtering (Default: 50)
-
val
topP: DoubleParam
If set to float < 1.0, only the most probable tokens with probabilities that add up to topP or higher are kept for generation (Default: 1.0)
-
final
def
transform(dataset: Dataset[_]): DataFrame
Given requirements are met, this applies the ML transformation within a Pipeline or stand-alone. Output annotations are generated as a new column; previous annotations are still available separately. Metadata is built at schema level to record the annotations' structural information outside their content.
- dataset
Dataset[Row]
- Definition Classes
- AnnotatorModel → Transformer
-
def
transform(dataset: Dataset[_], paramMap: ParamMap): DataFrame
- Definition Classes
- Transformer
- Annotations
- @Since( "2.0.0" )
-
def
transform(dataset: Dataset[_], firstParamPair: ParamPair[_], otherParamPairs: ParamPair[_]*): DataFrame
- Definition Classes
- Transformer
- Annotations
- @Since( "2.0.0" ) @varargs()
-
final
def
transformSchema(schema: StructType): StructType
Requirement for pipeline transformation validation. It is called on fit().
- Definition Classes
- RawAnnotator → PipelineStage
-
def
transformSchema(schema: StructType, logging: Boolean): StructType
- Attributes
- protected
- Definition Classes
- PipelineStage
- Annotations
- @DeveloperApi()
-
val
uid: String
- Definition Classes
- T5Transformer → Identifiable
-
def
validate(schema: StructType): Boolean
takes a Dataset and checks to see if all the required annotation types are present.
- schema
to be validated
- returns
True if all the required types are present, else false
- Attributes
- protected
- Definition Classes
- RawAnnotator
-
final
def
wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
-
final
def
wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
-
final
def
wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... ) @native()
-
def
wrapColumnMetadata(col: Column): Column
- Attributes
- protected
- Definition Classes
- RawAnnotator
-
def
write: MLWriter
- Definition Classes
- ParamsAndFeaturesWritable → DefaultParamsWritable → MLWritable
-
def
writeOnnxModel(path: String, spark: SparkSession, onnxWrapper: OnnxWrapper, suffix: String, fileName: String): Unit
- Definition Classes
- WriteOnnxModel
-
def
writeOnnxModels(path: String, spark: SparkSession, onnxWrappersWithNames: Seq[(OnnxWrapper, String)], suffix: String): Unit
- Definition Classes
- WriteOnnxModel
-
def
writeOpenvinoModel(path: String, spark: SparkSession, openvinoWrapper: OpenvinoWrapper, suffix: String, fileName: String): Unit
- Definition Classes
- WriteOpenvinoModel
-
def
writeOpenvinoModels(path: String, spark: SparkSession, ovWrappersWithNames: Seq[(OpenvinoWrapper, String)], suffix: String): Unit
- Definition Classes
- WriteOpenvinoModel
-
def
writeSentencePieceModel(path: String, spark: SparkSession, spp: SentencePieceWrapper, suffix: String, filename: String): Unit
- Definition Classes
- WriteSentencePieceModel
-
def
writeTensorflowHub(path: String, tfPath: String, spark: SparkSession, suffix: String = "_use"): Unit
- Definition Classes
- WriteTensorflowModel
-
def
writeTensorflowModel(path: String, spark: SparkSession, tensorflow: TensorflowWrapper, suffix: String, filename: String, configProtoBytes: Option[Array[Byte]] = None): Unit
- Definition Classes
- WriteTensorflowModel
-
def
writeTensorflowModelV2(path: String, spark: SparkSession, tensorflow: TensorflowWrapper, suffix: String, filename: String, configProtoBytes: Option[Array[Byte]] = None, savedSignatures: Option[Map[String, String]] = None): Unit
- Definition Classes
- WriteTensorflowModel
Inherited from HasEngine
Inherited from HasProtectedParams
Inherited from WriteSentencePieceModel
Inherited from HasCaseSensitiveProperties
Inherited from WriteOpenvinoModel
Inherited from WriteOnnxModel
Inherited from WriteTensorflowModel
Inherited from HasBatchedAnnotate[T5Transformer]
Inherited from AnnotatorModel[T5Transformer]
Inherited from CanBeLazy
Inherited from RawAnnotator[T5Transformer]
Inherited from HasOutputAnnotationCol
Inherited from HasInputAnnotationCols
Inherited from HasOutputAnnotatorType
Inherited from ParamsAndFeaturesWritable
Inherited from HasFeatures
Inherited from DefaultParamsWritable
Inherited from MLWritable
Inherited from Model[T5Transformer]
Inherited from Transformer
Inherited from PipelineStage
Inherited from Logging
Inherited from Params
Inherited from Serializable
Inherited from Serializable
Inherited from Identifiable
Inherited from AnyRef
Inherited from Any
Parameters
A list of (hyper-)parameter keys this annotator can take. Users can set and get the parameter values through setters and getters, respectively.
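For instance, a value set through a setter can be read back through the corresponding getter; the sketch below uses arbitrary values:

// Minimal sketch (arbitrary values): setters and getters for annotator parameters.
val t5 = T5Transformer.pretrained()
  .setTask("summarize:")
  .setDoSample(true)
  .setTopK(40)

val k: Int = t5.getTopK                 // 40
val sampling: Boolean = t5.getDoSample  // true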