## Description
A legal word embeddings lookup annotator that maps tokens to 100-dimensional vectors. The embeddings were trained with Word2Vec on legal text that had been lemmatized beforehand, so lookups on lemmatized tokens will match the training vocabulary most closely.
## How to use
```python
from sparknlp.annotator import WordEmbeddingsModel

# Map each token to its 100-dimensional vector.
model = WordEmbeddingsModel.pretrained("word2vec_osf_lemmatized_legal", "en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("word_embeddings")
```
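The annotator consumes DOCUMENT and TOKEN annotations, so a full pipeline needs a DocumentAssembler and a Tokenizer upstream. A minimal end-to-end sketch under that assumption (the sample sentence and column names are illustrative):

```python
import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, WordEmbeddingsModel
from pyspark.ml import Pipeline

spark = sparknlp.start()

# Turn the raw text column into DOCUMENT annotations.
document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

# Split each document into TOKEN annotations.
tokenizer = Tokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token")

# Look up a 100-dimensional vector for each token.
embeddings = WordEmbeddingsModel.pretrained("word2vec_osf_lemmatized_legal", "en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("word_embeddings")

pipeline = Pipeline(stages=[document_assembler, tokenizer, embeddings])

# Illustrative input; replace with your own legal text.
data = spark.createDataFrame([["The lessee shall pay the rent monthly."]]).toDF("text")
result = pipeline.fit(data).transform(data)

# One row per token: the token text and its vector.
result.selectExpr("explode(word_embeddings) AS emb") \
    .selectExpr("emb.result AS token", "emb.embeddings") \
    .show(truncate=False)
```

Tokens missing from the lookup table typically come back as zero vectors, so low coverage shows up as all-zero rows rather than errors.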
```scala
import com.johnsnowlabs.nlp.annotator._

// Map each token to its 100-dimensional vector.
val model = WordEmbeddingsModel.pretrained("word2vec_osf_lemmatized_legal", "en")
  .setInputCols("document", "token")
  .setOutputCol("word_embeddings")
```
```python
import nlu

nlu.load("en.embed.legal.osf_lemmatized_legal").predict("""Put your text here.""")
```
## Model Information
| Property | Value |
|---|---|
| Model Name | word2vec_osf_lemmatized_legal |
| Type | embeddings |
| Compatibility | Spark NLP 4.2.5+ |
| License | Open Source |
| Edition | Official |
| Input Labels | [document, token] |
| Output Labels | [embeddings] |
| Language | en |
| Size | 53.9 MB |
| Case sensitive | false |
| Dimension | 100 |
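Because the vectors were trained after lemmatization, lookup coverage may improve if tokens are lemmatized before the embedding stage. A sketch of that idea, assuming the general-purpose English lemma_antbnc lemmatizer is acceptable (a legal-domain lemmatizer would be a better fit if available); the lemma column name is illustrative:

```python
from sparknlp.annotator import LemmatizerModel, WordEmbeddingsModel

# Reduce surface forms ("courts", "filed") to the lemmas the
# embeddings were trained on. lemma_antbnc is a general-purpose
# English lemmatizer, used here only as an assumption.
lemmatizer = LemmatizerModel.pretrained("lemma_antbnc", "en") \
    .setInputCols(["token"]) \
    .setOutputCol("lemma")

# Lemma annotations are TOKEN-typed, so they can feed the lookup directly.
embeddings = WordEmbeddingsModel.pretrained("word2vec_osf_lemmatized_legal", "en") \
    .setInputCols(["document", "lemma"]) \
    .setOutputCol("word_embeddings")
```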
## References
https://osf.io/