Description
This model was trained on Common Crawl and Wikipedia using fastText. It was trained with CBOW and position-weights, in dimension 300, with character n-grams of length 5, a window of size 5, and 10 negative samples.
The model outputs a 300-dimensional vector per token. These vectors map words into a meaningful space in which the distance between two vectors reflects the semantic similarity of the corresponding words.
These embeddings can be used for tasks such as semantic word similarity, named entity recognition, sentiment analysis, and text classification.
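A common way to score word similarity from these vectors is cosine similarity. A minimal NumPy sketch; the two vectors here are random stand-ins for real 300-dimensional vectors taken from the model's "embeddings" output:

import numpy as np

def cosine_similarity(v1, v2):
    # Cosine of the angle between two embedding vectors:
    # close to 1.0 for semantically similar words, near 0.0 for unrelated ones.
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

v1 = np.random.rand(300)  # stand-in for a real word vector
v2 = np.random.rand(300)  # stand-in for a real word vector
print(cosine_similarity(v1, v2))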
How to use
Use this model as an embedding stage in a Spark NLP pipeline, after tokenization.

Python:
# Assumes an active Spark NLP session, e.g. spark = sparknlp.start()
from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector, Tokenizer, WordEmbeddingsModel

document_assembler = DocumentAssembler().setInputCol("text").setOutputCol("document")
sentence_detector = SentenceDetector().setInputCols(["document"]).setOutputCol("sentence")
tokenizer = Tokenizer().setInputCols(["sentence"]).setOutputCol("token")
embeddings = WordEmbeddingsModel.pretrained("persian_w2v_cc_300d", "fa") \
.setInputCols(["document", "token"]) \
.setOutputCol("embeddings")
nlp_pipeline = Pipeline(stages=[document_assembler, sentence_detector, tokenizer, embeddings])
pipeline_model = nlp_pipeline.fit(spark.createDataFrame([[""]]).toDF("text"))
result = pipeline_model.transform(spark.createDataFrame([['من یادگیری ماشین را دوست دارم']], ["text"]))
Scala:

import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

// document_assembler, sentence_detector and tokenizer are the Scala equivalents of the stages above
val embeddings = WordEmbeddingsModel.pretrained("persian_w2v_cc_300d", "fa")
.setInputCols(Array("document", "token"))
.setOutputCol("embeddings")
val pipeline = new Pipeline().setStages(Array(document_assembler, sentence_detector, tokenizer, embeddings))
val data = Seq("من یادگیری ماشین را دوست دارم").toDF("text")
val result = pipeline.fit(data).transform(data)
NLU:

import nlu
text = ["""من یادگیری ماشین را دوست دارم"""]
farvec_df = nlu.load('fa.embed.word2vec.300d').predict(text, output_level='token')
farvec_df
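For quick inference on plain strings without building a DataFrame, the fitted pipeline from the Python example can be wrapped in Spark NLP's LightPipeline. A minimal sketch, assuming pipeline_model from above:

from sparknlp.base import LightPipeline

light_pipeline = LightPipeline(pipeline_model)

# annotate() returns the string results per output column;
# use fullAnnotate() to access the numeric vectors attached to each token.
annotations = light_pipeline.annotate("من یادگیری ماشین را دوست دارم")
print(annotations["token"])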
Results
The model outputs a 300-dimensional word2vec feature vector per token:
| token | fa_embed_word2vec_300d_embeddings |
|-------|-----------------------------------|
| من | [-0.3861289620399475, -0.08295578509569168, -0... |
| را | [-0.15430298447608948, -0.24924889206886292, 0... |
| دوست | [0.07587642222642899, -0.24341894686222076, 0.... |
| دارم | [0.0899219810962677, -0.21863090991973877, 0.4... |
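A token/embedding table like this can be extracted from the Python result DataFrame by zipping each token with its vector; a sketch using the standard Spark SQL functions API:

from pyspark.sql import functions as F

# One row per token, pairing the token text with its 300-dimensional vector.
result.select(F.explode(F.arrays_zip(result.token.result,
                                     result.embeddings.embeddings)).alias("cols")) \
      .select(F.expr("cols['0']").alias("token"),
              F.expr("cols['1']").alias("fa_embed_word2vec_300d_embeddings")) \
      .show(truncate=50)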
Model Information
| Model Name: | persian_w2v_cc_300d |
| Type: | embeddings |
| Compatibility: | Spark NLP 2.7.0+ |
| License: | Open Source |
| Edition: | Official |
| Input Labels: | [document, token] |
| Output Labels: | [word_embeddings] |
| Language: | fa |
| Case sensitive: | false |
| Dimension: | 300 |
Data Source
This model was imported from the pre-trained fastText Common Crawl and Wikipedia vectors: https://fasttext.cc/docs/en/crawl-vectors.html
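For reference, vectors with comparable hyperparameters can be trained with the open-source fastText Python bindings. Note that the released library does not expose the position-weighted CBOW variant used for the official Common Crawl vectors, so this sketch falls back to plain CBOW; the corpus path is a placeholder:

import fasttext

# Approximates the published setup: CBOW, 300 dimensions,
# character n-grams of length 5, window 5, 10 negative samples.
model = fasttext.train_unsupervised(
    "fa_corpus.txt",  # placeholder: path to a Persian text corpus
    model="cbow",
    dim=300,
    minn=5, maxn=5,   # character n-grams of length 5
    ws=5,             # window size
    neg=10,           # negative samples
)
vector = model.get_word_vector("یادگیری")  # 300-dimensional numpy array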