English Legal BERT Embedding Large Cased model

Description

Pretrained BERT Embedding model, adapted from Hugging Face and curated for scalability and production readiness using Spark NLP. legalbert-large-1.7M-1 is an English model originally trained by pile-of-law.

How to use

from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, BertEmbeddings
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

tokenizer = Tokenizer() \
    .setInputCols("document") \
    .setOutputCol("token")
  
embeddings = BertEmbeddings.pretrained("bert_embeddings_legalbert_large_1.7M_1","en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("embeddings")
    
pipeline = Pipeline(stages=[documentAssembler, tokenizer, embeddings])

data = spark.createDataFrame([["I love Spark NLP"]]).toDF("text")

result = pipeline.fit(data).transform(data)

Model Information

Model Name: bert_embeddings_legalbert_large_1.7M_1
Compatibility: Spark NLP 4.2.7+
License: Open Source
Edition: Official
Input Labels: [document, token]
Output Labels: [embeddings]
Language: en
Size: 646.5 MB
Case sensitive: true
Max sentence length: 128

References

  • https://huggingface.co/pile-of-law/legalbert-large-1.7M-1
  • https://github.com/LexPredict/lexpredict-lexnlp
  • https://arxiv.org/abs/2110.00976
  • https://arxiv.org/abs/1907.11692
  • https://arxiv.org/abs/1810.04805