Description
This model identifies the sentiment (positive or negative) of Turkish texts.
Predicted Entities
POSITIVE, NEGATIVE
How to use
from sparknlp.base import DocumentAssembler, LightPipeline
from sparknlp.annotator import UniversalSentenceEncoder, ClassifierDLModel
from pyspark.ml import Pipeline

document = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document") \
    .setCleanupMode("shrink")

embeddings = UniversalSentenceEncoder.pretrained("tfhub_use_multi", "xx") \
    .setInputCols("document") \
    .setOutputCol("sentence_embeddings")

sentimentClassifier = ClassifierDLModel.pretrained("classifierdl_use_sentiment", "tr") \
    .setInputCols(["document", "sentence_embeddings"]) \
    .setOutputCol("class")

tr_sentiment_pipeline = Pipeline(stages=[document, embeddings, sentimentClassifier])

light_pipeline = LightPipeline(tr_sentiment_pipeline.fit(spark.createDataFrame([['']]).toDF("text")))

result1 = light_pipeline.annotate("Bu sıralar kafam çok karışık.")
result2 = light_pipeline.annotate("Sınavımı geçtiğimi öğrenince derin bir nefes aldım.")

print(result1["class"], result2["class"], sep="\n")
import com.johnsnowlabs.nlp.{DocumentAssembler, LightPipeline}
import com.johnsnowlabs.nlp.embeddings.UniversalSentenceEncoder
import com.johnsnowlabs.nlp.annotators.classifier.dl.ClassifierDLModel
import org.apache.spark.ml.Pipeline
import spark.implicits._

val document = new DocumentAssembler()
    .setInputCol("text")
    .setOutputCol("document")

val embeddings = UniversalSentenceEncoder.pretrained("tfhub_use_multi", "xx")
    .setInputCols("document")
    .setOutputCol("sentence_embeddings")

val sentimentClassifier = ClassifierDLModel.pretrained("classifierdl_use_sentiment", "tr")
    .setInputCols(Array("document", "sentence_embeddings"))
    .setOutputCol("class")

val tr_sentiment_pipeline = new Pipeline().setStages(Array(document, embeddings, sentimentClassifier))

val light_pipeline = new LightPipeline(tr_sentiment_pipeline.fit(Seq("").toDF("text")))

val result1 = light_pipeline.annotate("Bu sıralar kafam çok karışık.")
val result2 = light_pipeline.annotate("Sınavımı geçtiğimi öğrenince derin bir nefes aldım.")
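LightPipeline.annotate is convenient for a handful of sentences; the same fitted pipeline can also score a Spark DataFrame in batch with transform(). The sketch below is illustrative rather than part of the original card and reuses the tr_sentiment_pipeline and spark objects from the Python snippet above.

# Illustrative batch-scoring sketch (not from the original card); reuses
# tr_sentiment_pipeline and the SparkSession `spark` defined above.
data = spark.createDataFrame(
    [["Bu sıralar kafam çok karışık."],
     ["Sınavımı geçtiğimi öğrenince derin bir nefes aldım."]]
).toDF("text")

model = tr_sentiment_pipeline.fit(data)  # fits the pipeline of pretrained stages
model.transform(data) \
    .selectExpr("text", "class.result AS sentiment") \
    .show(truncate=False)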
Results
['NEGATIVE']
['POSITIVE']
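annotate returns only the predicted label, as shown above. If per-class confidence scores are needed, LightPipeline.fullAnnotate exposes each Annotation's metadata; a minimal Python sketch, reusing light_pipeline from the snippet above:

# Sketch: fullAnnotate returns Annotation objects; `result` is the predicted
# label and `metadata` carries the scores assigned by ClassifierDLModel.
full = light_pipeline.fullAnnotate("Sınavımı geçtiğimi öğrenince derin bir nefes aldım.")[0]
for annotation in full["class"]:
    print(annotation.result, annotation.metadata)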
Model Information
Model Name: classifierdl_use_sentiment
Compatibility: Spark NLP 3.3.0+
License: Open Source
Edition: Official
Input Labels: [sentence_embeddings]
Output Labels: [class]
Language: tr
Data Source
https://raw.githubusercontent.com/gurkandyilmaz/sentiment/master/data/
Benchmarking
              precision  recall  f1-score  support
NEGATIVE           0.86    0.88      0.87    19967
POSITIVE           0.88    0.85      0.86    19826
accuracy                             0.87    39793
macro avg          0.87    0.87      0.87    39793
weighted avg       0.87    0.87      0.87    39793
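The figures above follow the layout of a scikit-learn classification report. The original card does not include its evaluation script; the sketch below only illustrates how a report in this format is produced once gold labels and pipeline predictions have been collected (y_true and y_pred are placeholders, not the benchmark data).

# Sketch only: y_true / y_pred stand in for gold labels and pipeline
# predictions ("POSITIVE"/"NEGATIVE") gathered from a held-out test set.
from sklearn.metrics import classification_report

y_true = ["NEGATIVE", "POSITIVE", "NEGATIVE"]  # placeholder gold labels
y_pred = ["NEGATIVE", "POSITIVE", "POSITIVE"]  # placeholder model outputs
print(classification_report(y_true, y_pred, digits=2))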