Deepa NER XLM-RoBERTa Large Model: deepa_xlmroberta_ner_large_panx

Description

Named Entity Recognition (NER) model based on XLM-RoBERTa Large, trained on the PAN-X dataset.

Predicted Entities


How to use

from sparknlp.base import DocumentAssembler
from sparknlp.annotator import RegexTokenizer, XlmRoBertaForTokenClassification, NerConverter
from pyspark.ml import Pipeline

documentAssembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

# Custom tokenizer that splits text on whitespace
tokenizer = RegexTokenizer() \
    .setInputCols(["document"]) \
    .setOutputCol("token") \
    .setPattern("\\s+")

# Pretrained XLM-RoBERTa Large token classifier (deepa_xlmroberta_ner_large_panx)
token_classifier = XlmRoBertaForTokenClassification.pretrained("deepa_xlmroberta_ner_large_panx", "en") \
    .setInputCols(["document", "token"]) \
    .setOutputCol("ner")

# Group IOB tags into full entity chunks
ner_converter = NerConverter() \
    .setInputCols(["document", "token", "ner"]) \
    .setOutputCol("ner_chunk")

pipeline = Pipeline(stages=[documentAssembler, tokenizer, token_classifier, ner_converter])
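A minimal usage sketch follows, assuming an active Spark NLP session; the example sentence is illustrative and any DataFrame with a "text" column works:

import sparknlp

spark = sparknlp.start()

# Hypothetical example text for demonstration purposes
example = spark.createDataFrame([["John works at Google in New York."]]).toDF("text")

# Fit the pipeline (no trainable stages here) and run inference
model = pipeline.fit(example)
result = model.transform(example)

# Show the extracted entity chunks
result.select("ner_chunk.result").show(truncate=False)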

Model Information

Model Name: deepa_xlmroberta_ner_large_panx_dataset
Compatibility: Spark NLP 4.1.0+
License: Open Source
Edition: Community
Input Labels: [document, token]
Output Labels: [ner]
Language: en
Size: 1.8 GB
Case sensitive: true
Max sentence length: 256