sparknlp.annotator.embeddings.camembert_embeddings
Contains classes for CamemBertEmbeddings.
Module Contents
Classes
CamemBertEmbeddings | The CamemBERT model was proposed in CamemBERT: a Tasty French Language Model.
- class CamemBertEmbeddings(classname='com.johnsnowlabs.nlp.embeddings.CamemBertEmbeddings', java_model=None)
The CamemBERT model was proposed in CamemBERT: a Tasty French Language Model by Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah, and Benoît Sagot.
It is based on Facebook’s RoBERTa model released in 2019 and was trained on 138GB of French text.
Pretrained models can be loaded with pretrained of the companion object:

>>> embeddings = CamemBertEmbeddings.pretrained() \
...     .setInputCols(["token", "document"]) \
...     .setOutputCol("camembert_embeddings")

The default model is "camembert_base", if no name is provided. For available pretrained models please see the Models Hub.
For extended examples of usage, see the Examples and the CamemBertEmbeddingsTestSpec.
To see which models are compatible and how to import them see JohnSnowLabs/spark-nlp#5669.
Input Annotation types: DOCUMENT, TOKEN
Output Annotation type: WORD_EMBEDDINGS
- Parameters:
- batchSize
Size of every batch, by default 8
- dimension
Number of embedding dimensions, by default 768
- caseSensitive
Whether to ignore case in tokens for embeddings matching, by default False
- maxSentenceLength
Max sentence length to process, by default 128
- configProtoBytes
ConfigProto from tensorflow, serialized into byte array.
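These values can be changed on the annotator before running the pipeline. A minimal sketch, assuming the standard setter names (setBatchSize, setMaxSentenceLength, setCaseSensitive) exposed by the annotator:

>>> embeddings = CamemBertEmbeddings.pretrained() \
...     .setInputCols(["token", "document"]) \
...     .setOutputCol("camembert_embeddings") \
...     .setBatchSize(4) \
...     .setMaxSentenceLength(256) \
...     .setCaseSensitive(True)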
References
CamemBERT: a Tasty French Language Model
https://huggingface.co/camembert
Paper abstract
Pretrained language models are now ubiquitous in Natural Language Processing. Despite their success, most available models have either been trained on English data or on the concatenation of data in multiple languages. This makes practical use of such models –in all languages except English– very limited. In this paper, we investigate the feasibility of training monolingual Transformer-based language models for other languages, taking French as an example and evaluating our language models on part-of-speech tagging, dependency parsing, named entity recognition and natural language inference tasks. We show that the use of web crawled data is preferable to the use of Wikipedia data. More surprisingly, we show that a relatively small web crawled dataset (4GB) leads to results that are as good as those obtained using larger datasets (130+GB). Our best performing model CamemBERT reaches or improves the state of the art in all four downstream tasks.
Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> tokenizer = Tokenizer() \
...     .setInputCols(["document"]) \
...     .setOutputCol("token")
>>> embeddings = CamemBertEmbeddings.pretrained() \
...     .setInputCols(["token", "document"]) \
...     .setOutputCol("camembert_embeddings")
>>> embeddingsFinisher = EmbeddingsFinisher() \
...     .setInputCols(["camembert_embeddings"]) \
...     .setOutputCols("finished_embeddings") \
...     .setOutputAsVector(True)
>>> pipeline = Pipeline().setStages([
...     documentAssembler,
...     tokenizer,
...     embeddings,
...     embeddingsFinisher
... ])
>>> data = spark.createDataFrame([["C'est une phrase."]]).toDF("text")
>>> result = pipeline.fit(data).transform(data)
>>> result.selectExpr("explode(finished_embeddings) as result").show(5, 80)
+--------------------------------------------------------------------------------+
|                                                                          result|
+--------------------------------------------------------------------------------+
|[0.08442357927560806,-0.12863239645957947,-0.03835778683423996,0.200479581952...|
|[0.048462312668561935,0.12637358903884888,-0.27429091930389404,-0.07516729831...|
|[0.02690504491329193,0.12104076147079468,0.012526623904705048,-0.031543646007...|
|[0.05877285450696945,-0.08773420006036758,-0.06381352990865707,0.122621834278...|
+--------------------------------------------------------------------------------+
- setConfigProtoBytes(b)
Sets configProto from tensorflow, serialized into byte array.
- Parameters:
- b : List[int]
ConfigProto from tensorflow, serialized into byte array
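For illustration, such a byte array can be built by serializing a TensorFlow ConfigProto. A minimal sketch, assuming TensorFlow is installed and its v1 compat API is available:

>>> import tensorflow as tf
>>> config_proto = tf.compat.v1.ConfigProto(allow_soft_placement=True)
>>> # SerializeToString() returns bytes; list() converts them to List[int]
>>> embeddings.setConfigProtoBytes(list(config_proto.SerializeToString()))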
- static loadSavedModel(folder, spark_session)
Loads a locally saved model.
- Parameters:
- folder : str
Folder of the saved model
- spark_session : pyspark.sql.SparkSession
The current SparkSession
- Returns:
- CamemBertEmbeddings
The restored model
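For example, a model folder exported for Spark NLP (see the compatibility note above) could be restored as follows; the folder path is hypothetical, and spark is assumed to be an active SparkSession:

>>> # "/tmp/exported_camembert" is a hypothetical folder containing an exported model
>>> loaded = CamemBertEmbeddings.loadSavedModel("/tmp/exported_camembert", spark) \
...     .setInputCols(["token", "document"]) \
...     .setOutputCol("camembert_embeddings")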
- static pretrained(name='camembert_base', lang='fr', remote_loc=None)
Downloads and loads a pretrained model.
- Parameters:
- name : str, optional
Name of the pretrained model, by default “camembert_base”
- lang : str, optional
Language of the pretrained model, by default “fr”
- remote_loc : str, optional
Optional remote address of the resource, by default None. Will use Spark NLP’s repositories otherwise.
- Returns:
- CamemBertEmbeddings
The restored model
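For instance, the default model can also be requested explicitly by name and language:

>>> embeddings = CamemBertEmbeddings.pretrained("camembert_base", "fr") \
...     .setInputCols(["token", "document"]) \
...     .setOutputCol("camembert_embeddings")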