sparknlp.annotator.classifier_dl.classifier_dl#
Contains classes for ClassifierDL.
Module Contents#
Classes#
ClassifierDLApproach: Trains a ClassifierDL for generic Multi-class Text Classification.
ClassifierDLModel: ClassifierDL for generic Multi-class Text Classification.
- class ClassifierDLApproach[source]#
Trains a ClassifierDL for generic Multi-class Text Classification.
ClassifierDL uses the state-of-the-art Universal Sentence Encoder as an input for text classification. The ClassifierDL annotator uses a deep learning model (DNN) we have built inside TensorFlow and supports up to 100 classes.
For instantiated/pretrained models, see ClassifierDLModel.

Setting a test dataset to monitor model metrics can be done with setTestDataset. The method expects a path to a parquet file containing a dataframe that has the same required columns as the training dataframe. The pre-processing steps applied to the training dataframe should also be applied to the test dataframe. The following example shows how to create the test dataset:

>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> embeddings = UniversalSentenceEncoder.pretrained() \
...     .setInputCols(["document"]) \
...     .setOutputCol("sentence_embeddings")
>>> preProcessingPipeline = Pipeline().setStages([documentAssembler, embeddings])
>>> (train, test) = data.randomSplit([0.8, 0.2])
>>> preProcessingPipeline \
...     .fit(test) \
...     .transform(test) \
...     .write \
...     .mode("overwrite") \
...     .parquet("test_data")
>>> classifier = ClassifierDLApproach() \
...     .setInputCols(["sentence_embeddings"]) \
...     .setOutputCol("category") \
...     .setLabelColumn("label") \
...     .setTestDataset("test_data")
For extended examples of usage, see the Examples.
Input Annotation types: SENTENCE_EMBEDDINGS
Output Annotation type: CATEGORY
- Parameters:
- batchSize
Batch size, by default 64
- configProtoBytes
ConfigProto from tensorflow, serialized into byte array.
- dropout
Dropout coefficient, by default 0.5
- enableOutputLogs
Whether to use stdout in addition to Spark logs, by default False
- evaluationLogExtended
Whether validation logs should be extended to display the time and evaluation of each label, by default False
- labelColumn
Column with the label of each document
- lr
Learning Rate, by default 0.005
- maxEpochs
Maximum number of epochs to train, by default 30
- outputLogsPath
Folder path to save training logs
- randomSeed
Random seed for shuffling
- testDataset
Path to the test dataset. If set, it is used to calculate statistics during training.
- validationSplit
Proportion of the training dataset to validate against the model on each epoch. The value should be between 0.0 and 1.0. By default it is 0.0, i.e. validation is off.
- verbose
Level of verbosity during training
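As a sketch of how these training parameters combine, the following hedged example enables validation and extended logging; the values and log path are illustrative placeholders, not tuned recommendations:

>>> classifier = ClassifierDLApproach() \
...     .setInputCols(["sentence_embeddings"]) \
...     .setOutputCol("category") \
...     .setLabelColumn("label") \
...     .setMaxEpochs(30) \
...     .setLr(0.005) \
...     .setDropout(0.5) \
...     .setValidationSplit(0.1) \
...     .setEvaluationLogExtended(True) \
...     .setEnableOutputLogs(True) \
...     .setOutputLogsPath("classifier_logs")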
See also
MultiClassifierDLApproach
for multi-label classification
SentimentDLApproach
for sentiment analysis
Notes
This annotator accepts a label column holding a single item of type String, Int, Float, or Double.
UniversalSentenceEncoder, Transformer-based embeddings, or SentenceEmbeddings can be used for the inputCol.
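For example, transformer word embeddings pooled into sentence embeddings can stand in for the UniversalSentenceEncoder stage. A minimal sketch, assuming a Tokenizer stage that outputs "token" and the default pretrained BertEmbeddings model:

>>> wordEmbeddings = BertEmbeddings.pretrained() \
...     .setInputCols(["document", "token"]) \
...     .setOutputCol("embeddings")
>>> sentenceEmbeddings = SentenceEmbeddings() \
...     .setInputCols(["document", "embeddings"]) \
...     .setOutputCol("sentence_embeddings") \
...     .setPoolingStrategy("AVERAGE")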
Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
In this example, the training data "sentiment.csv" has the form of:

text,label
This movie is the best movie I have watched ever! In my opinion this movie can win an award.,0
This was a terrible movie! The acting was bad really bad!,1
...

Then training can be done like so:
>>> smallCorpus = spark.read.option("header", "True").csv("src/test/resources/classifier/sentiment.csv")
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> useEmbeddings = UniversalSentenceEncoder.pretrained() \
...     .setInputCols("document") \
...     .setOutputCol("sentence_embeddings")
>>> docClassifier = ClassifierDLApproach() \
...     .setInputCols("sentence_embeddings") \
...     .setOutputCol("category") \
...     .setLabelColumn("label") \
...     .setBatchSize(64) \
...     .setMaxEpochs(20) \
...     .setLr(5e-3) \
...     .setDropout(0.5)
>>> pipeline = Pipeline().setStages([
...     documentAssembler,
...     useEmbeddings,
...     docClassifier
... ])
>>> pipelineModel = pipeline.fit(smallCorpus)
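To sanity-check the fitted pipeline, it can be applied back to a dataframe and the predicted categories inspected; a short sketch using the column names from the stages above:

>>> result = pipelineModel.transform(smallCorpus)
>>> result.select("text", "category.result").show(truncate=50)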
- class ClassifierDLModel(classname='com.johnsnowlabs.nlp.annotators.classifier.dl.ClassifierDLModel', java_model=None)[source]#
ClassifierDL for generic Multi-class Text Classification.
ClassifierDL uses the state-of-the-art Universal Sentence Encoder as an input for text classification. The ClassifierDL annotator uses a deep learning model (DNN) we have built inside TensorFlow and supports up to 100 classes.
This is the instantiated model of the ClassifierDLApproach. For training your own model, please see the documentation of that class.

Pretrained models can be loaded with pretrained() of the companion object:

>>> classifierDL = ClassifierDLModel.pretrained() \
...     .setInputCols(["sentence_embeddings"]) \
...     .setOutputCol("classification")
The default model is "classifierdl_use_trec6", if no name is provided. It uses embeddings from the UniversalSentenceEncoder and is trained on the TREC-6 dataset.

For available pretrained models please see the Models Hub.
For extended examples of usage, see the Examples.
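Once fitted into a pipeline, the model can also serve fast, single-text inference through a LightPipeline. A minimal sketch, assuming pipelineModel = pipeline.fit(data) from the Examples below:

>>> from sparknlp.base import LightPipeline
>>> light = LightPipeline(pipelineModel)
>>> light.annotate("I'm ready!")["sarcasm"]
['normal']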
Input Annotation types: SENTENCE_EMBEDDINGS
Output Annotation type: CATEGORY
- Parameters:
- configProtoBytes
ConfigProto from tensorflow, serialized into byte array.
- classes
Tags used to train this ClassifierDLModel
See also
MultiClassifierDLModel
for multi-label classification
SentimentDLModel
for sentiment analysis
Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> sentence = SentenceDetector() \
...     .setInputCols("document") \
...     .setOutputCol("sentence")
>>> useEmbeddings = UniversalSentenceEncoder.pretrained() \
...     .setInputCols("document") \
...     .setOutputCol("sentence_embeddings")
>>> sarcasmDL = ClassifierDLModel.pretrained("classifierdl_use_sarcasm") \
...     .setInputCols("sentence_embeddings") \
...     .setOutputCol("sarcasm")
>>> pipeline = Pipeline() \
...     .setStages([
...         documentAssembler,
...         sentence,
...         useEmbeddings,
...         sarcasmDL
...     ])
>>> data = spark.createDataFrame([
...     ["I'm ready!"],
...     ["If I could put into words how much I love waking up at 6 am on Mondays I would."]
... ]).toDF("text")
>>> result = pipeline.fit(data).transform(data)
>>> result.selectExpr("explode(arrays_zip(sentence, sarcasm)) as out") \
...     .selectExpr("out.sentence.result as sentence", "out.sarcasm.result as sarcasm") \
...     .show(truncate=False)
+-------------------------------------------------------------------------------+-------+
|sentence                                                                       |sarcasm|
+-------------------------------------------------------------------------------+-------+
|I'm ready!                                                                     |normal |
|If I could put into words how much I love waking up at 6 am on Mondays I would.|sarcasm|
+-------------------------------------------------------------------------------+-------+
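Like any Spark ML stage, the loaded model can be persisted and restored with the standard MLWriter/MLReader API; a brief sketch, with a placeholder path:

>>> sarcasmDL.write().overwrite().save("./tmp_sarcasm_model")
>>> restored = ClassifierDLModel.load("./tmp_sarcasm_model")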
- setConfigProtoBytes(b)[source]#
Sets configProto from tensorflow, serialized into byte array.
- Parameters:
- b : List[int]
ConfigProto from tensorflow, serialized into byte array
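A typical way to produce these bytes is to serialize a TensorFlow ConfigProto; a hedged sketch, assuming TensorFlow is installed (the GPU option shown is only an example setting):

>>> import tensorflow as tf
>>> config = tf.compat.v1.ConfigProto()
>>> config.gpu_options.allow_growth = True
>>> classifierDL.setConfigProtoBytes(list(config.SerializeToString()))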
- static pretrained(name='classifierdl_use_trec6', lang='en', remote_loc=None)[source]#
Downloads and loads a pretrained model.
- Parameters:
- name : str, optional
Name of the pretrained model, by default “classifierdl_use_trec6”
- lang : str, optional
Language of the pretrained model, by default “en”
- remote_loc : str, optional
Optional remote address of the resource, by default None. Will use Spark NLP's repositories otherwise.
- Returns:
- ClassifierDLModel
The restored model
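For example, a specific model and language can be requested explicitly; a minimal sketch using the sarcasm model named in the Examples above:

>>> sarcasmDL = ClassifierDLModel.pretrained("classifierdl_use_sarcasm", lang="en") \
...     .setInputCols(["sentence_embeddings"]) \
...     .setOutputCol("sarcasm")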