sparknlp.annotator.classifier_dl.multi_classifier_dl
Contains classes for MultiClassifierDL.
Module Contents
Classes
MultiClassifierDLApproach
    Trains a MultiClassifierDL for Multi-label Text Classification.
MultiClassifierDLModel
    MultiClassifierDL for Multi-label Text Classification.
- class MultiClassifierDLApproach
Trains a MultiClassifierDL for Multi-label Text Classification.
MultiClassifierDL uses a bidirectional GRU with a convolutional model built in TensorFlow and supports up to 100 classes.
In machine learning, multi-label classification and the strongly related problem of multi-output classification are variants of the classification problem where multiple labels may be assigned to each instance. Multi-label classification is a generalization of multiclass classification, which is the single-label problem of categorizing instances into precisely one of more than two classes; in the multi-label problem there is no constraint on how many of the classes the instance can be assigned to. Formally, multi-label classification is the problem of finding a model that maps inputs x to binary vectors y (assigning a value of 0 or 1 for each element (label) in y).
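For illustration only (plain Python, not a Spark NLP API), a label array can be encoded as such a binary vector over a fixed label set:

>>> label_set = ["toxic", "obscene", "insult"]
>>> labels = ["obscene", "insult"]
>>> [1 if label in labels else 0 for label in label_set]
[0, 1, 1]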
For instantiated/pretrained models, see MultiClassifierDLModel.

The inputs to MultiClassifierDL are sentence embeddings such as the state-of-the-art UniversalSentenceEncoder, BertSentenceEmbeddings, SentenceEmbeddings or other sentence embeddings.

Setting a test dataset to monitor model metrics can be done with .setTestDataset. The method expects a path to a parquet file containing a dataframe that has the same required columns as the training dataframe. The pre-processing steps for the training dataframe should also be applied to the test dataframe. The following example shows how to create the test dataset:

>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> embeddings = UniversalSentenceEncoder.pretrained() \
...     .setInputCols(["document"]) \
...     .setOutputCol("sentence_embeddings")
>>> preProcessingPipeline = Pipeline().setStages([documentAssembler, embeddings])
>>> (train, test) = data.randomSplit([0.8, 0.2])
>>> preProcessingPipeline \
...     .fit(test) \
...     .transform(test) \
...     .write \
...     .mode("overwrite") \
...     .parquet("test_data")
>>> multiClassifier = MultiClassifierDLApproach() \
...     .setInputCols(["sentence_embeddings"]) \
...     .setOutputCol("category") \
...     .setLabelColumn("label") \
...     .setTestDataset("test_data")
For extended examples of usage, see the Examples.
Input Annotation types: SENTENCE_EMBEDDINGS
Output Annotation type: CATEGORY
- Parameters:
- batchSize
Batch size, by default 64
- configProtoBytes
ConfigProto from tensorflow, serialized into byte array.
- enableOutputLogs
Whether to use stdout in addition to Spark logs, by default False
- evaluationLogExtended
Whether validation logs should be extended to display the time and evaluation of each label, by default False (see the sketch after this parameter list).
- labelColumn
Column with the label(s) for each document
- lr
Learning Rate, by default 0.001
- maxEpochs
Maximum number of epochs to train, by default 10
- outputLogsPath
Folder path to save training logs
- randomSeed
Random seed, by default 44
- shufflePerEpoch
Whether to shuffle the training data on each epoch, by default False
- testDataset
Path to a test dataset. If set, it is used to calculate statistics on this dataset during training.
- threshold
The minimum threshold for each label to be accepted, by default 0.5
- validationSplit
Proportion of the training dataset to validate against the model on each epoch. The value should be between 0.0 and 1.0; by default 0.0 (off).
- verbose
Level of verbosity during training
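A minimal sketch of how the training-control parameters above might be combined. The setter names follow Spark NLP's usual set<Param> convention for the parameters listed here; the log path is a hypothetical example:

>>> multiClassifier = MultiClassifierDLApproach() \
...     .setInputCols(["sentence_embeddings"]) \
...     .setOutputCol("category") \
...     .setLabelColumn("labels") \
...     .setEnableOutputLogs(True) \
...     .setOutputLogsPath("training_logs") \
...     .setEvaluationLogExtended(True) \
...     .setValidationSplit(0.1) \
...     .setShufflePerEpoch(True) \
...     .setRandomSeed(44)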
See also
ClassifierDLApproach
for single-class classification
SentimentDLApproach
for sentiment analysis
Notes
This annotator requires the label column to contain an array of Strings. UniversalSentenceEncoder, BertSentenceEmbeddings, SentenceEmbeddings or other sentence embeddings can be used for the inputCol; a sketch for building the label array follows below.
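If the raw labels arrive as a single delimited string, they can be split into the required array column with standard PySpark functions. A minimal sketch, assuming a hypothetical labels_str column holding comma-separated labels:

>>> from pyspark.sql.functions import split, col
>>> trainDataset = trainDataset.withColumn("labels", split(col("labels_str"), ","))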
Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
In this example, the training data has the form:
+----------------+--------------------+--------------------+
|              id|                text|              labels|
+----------------+--------------------+--------------------+
|ed58abb40640f983|PN NewsYou mean ... |             [toxic]|
|a1237f726b5f5d89| Dude. Place the ...|   [obscene, insult]|
|24b0d6c8733c2abe| Thanks - thanks ...|            [insult]|
|8c4478fb239bcfc0|" Gee, 5 minutes ...|[toxic, obscene, ...|
+----------------+--------------------+--------------------+
Process training data to create text with associated array of labels:
>>> trainDataset.printSchema()
root
 |-- id: string (nullable = true)
 |-- text: string (nullable = true)
 |-- labels: array (nullable = true)
 |    |-- element: string (containsNull = true)
Then create pipeline for training:
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document") \
...     .setCleanupMode("shrink")
>>> embeddings = UniversalSentenceEncoder.pretrained() \
...     .setInputCols("document") \
...     .setOutputCol("embeddings")
>>> docClassifier = MultiClassifierDLApproach() \
...     .setInputCols("embeddings") \
...     .setOutputCol("category") \
...     .setLabelColumn("labels") \
...     .setBatchSize(128) \
...     .setMaxEpochs(10) \
...     .setLr(1e-3) \
...     .setThreshold(0.5) \
...     .setValidationSplit(0.1)
>>> pipeline = Pipeline().setStages([
...     documentAssembler,
...     embeddings,
...     docClassifier
... ])
>>> pipelineModel = pipeline.fit(trainDataset)
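Once fitted, the pipeline model transforms new data like any Spark ML model; a short follow-up sketch with hypothetical input:

>>> testData = spark.createDataFrame([["You are a nice person."]]).toDF("text")
>>> predictions = pipelineModel.transform(testData)
>>> predictions.select("category.result").show(truncate=False)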
- class MultiClassifierDLModel(classname='com.johnsnowlabs.nlp.annotators.classifier.dl.MultiClassifierDLModel', java_model=None)
MultiClassifierDL for Multi-label Text Classification.
MultiClassifierDL uses a bidirectional GRU with a convolutional model built in TensorFlow and supports up to 100 classes.
In machine learning, multi-label classification and the strongly related problem of multi-output classification are variants of the classification problem where multiple labels may be assigned to each instance. Multi-label classification is a generalization of multiclass classification, which is the single-label problem of categorizing instances into precisely one of more than two classes; in the multi-label problem there is no constraint on how many of the classes the instance can be assigned to. Formally, multi-label classification is the problem of finding a model that maps inputs x to binary vectors y (assigning a value of 0 or 1 for each element (label) in y).
The inputs to MultiClassifierDL are sentence embeddings such as the state-of-the-art UniversalSentenceEncoder, BertSentenceEmbeddings, SentenceEmbeddings or other sentence embeddings.

This is the instantiated model of the MultiClassifierDLApproach. For training your own model, please see the documentation of that class.

Pretrained models can be loaded with pretrained() of the companion object:

>>> multiClassifier = MultiClassifierDLModel.pretrained() \
...     .setInputCols(["sentence_embeddings"]) \
...     .setOutputCol("categories")

The default model is "multiclassifierdl_use_toxic", if no name is provided. It uses embeddings from the UniversalSentenceEncoder and classifies toxic comments. The data is based on the Jigsaw Toxic Comment Classification Challenge. For available pretrained models please see the Models Hub.
For extended examples of usage, see the Examples.
Input Annotation types: SENTENCE_EMBEDDINGS
Output Annotation type: CATEGORY
- Parameters:
- configProtoBytes
ConfigProto from tensorflow, serialized into byte array.
- threshold
The minimum threshold for each label to be accepted, by default 0.5
- classes
Get the tags used to train this MultiClassifierDLModel
See also
ClassifierDLModel
for single-class classification
SentimentDLModel
for sentiment analysis
Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> useEmbeddings = UniversalSentenceEncoder.pretrained() \
...     .setInputCols("document") \
...     .setOutputCol("sentence_embeddings")
>>> multiClassifierDl = MultiClassifierDLModel.pretrained() \
...     .setInputCols("sentence_embeddings") \
...     .setOutputCol("classifications")
>>> pipeline = Pipeline() \
...     .setStages([
...         documentAssembler,
...         useEmbeddings,
...         multiClassifierDl
...     ])
>>> data = spark.createDataFrame([
...     ["This is pretty good stuff!"],
...     ["Wtf kind of crap is this"]
... ]).toDF("text")
>>> result = pipeline.fit(data).transform(data)
>>> result.select("text", "classifications.result").show(truncate=False)
+--------------------------+----------------+
|text                      |result          |
+--------------------------+----------------+
|This is pretty good stuff!|[]              |
|Wtf kind of crap is this  |[toxic, obscene]|
+--------------------------+----------------+
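For quick single-string inference, the fitted pipeline can also be wrapped in Spark NLP's LightPipeline; a brief sketch:

>>> from sparknlp.base import LightPipeline
>>> light = LightPipeline(pipeline.fit(data))
>>> # annotate returns a dict keyed by output column, e.g. light_result["classifications"]
>>> light_result = light.annotate("Wtf kind of crap is this")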
- setThreshold(v)
Sets minimum threshold for each label to be accepted, by default 0.5.
- Parameters:
- v : float
The minimum threshold for each label to be accepted, by default 0.5
- setConfigProtoBytes(b)
Sets configProto from tensorflow, serialized into byte array.
- Parameters:
- b : List[int]
ConfigProto from tensorflow, serialized into byte array
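The byte array is typically produced by serializing a TensorFlow ConfigProto. A minimal sketch, assuming TensorFlow is installed; the session option shown is an arbitrary example:

>>> import tensorflow as tf
>>> config = tf.compat.v1.ConfigProto(allow_soft_placement=True)
>>> multiClassifierDl.setConfigProtoBytes(list(config.SerializeToString()))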
- static pretrained(name='multiclassifierdl_use_toxic', lang='en', remote_loc=None)
Downloads and loads a pretrained model.
- Parameters:
- name : str, optional
Name of the pretrained model, by default "multiclassifierdl_use_toxic"
- lang : str, optional
Language of the pretrained model, by default "en"
- remote_loc : str, optional
Optional remote address of the resource, by default None. Will use Spark NLP's repositories otherwise.
- Returns:
- MultiClassifierDLModel
The restored model
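For example, the default English model can be requested explicitly by name and language:

>>> model = MultiClassifierDLModel.pretrained("multiclassifierdl_use_toxic", lang="en")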