Spell Checking Pipeline for English

Description

The check_spelling pipeline is a pretrained pipeline that we can use to process text with a simple pipeline that performs basic processing steps and corrects the spelling of each token. It performs most of the common text processing tasks on your DataFrame.


How to use


from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline('check_spelling', lang = 'en')
annotations = pipeline.fullAnnotate("I liek to live dangertus ! ")[0]
annotations.keys()
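
fullAnnotate returns a dictionary of Annotation lists keyed by output column. A minimal sketch for extracting the corrected tokens, assuming the 'checked' output column shown in the Results below:

# Each entry is an Annotation object; its result field holds the corrected token
corrected_tokens = [annotation.result for annotation in annotations['checked']]
print(corrected_tokens)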


import com.johnsnowlabs.nlp.pretrained.PretrainedPipeline

val pipeline = new PretrainedPipeline("check_spelling", lang = "en")
val result = pipeline.fullAnnotate("I liek to live dangertus ! ")(0)



import nlu

text = ["I liek to live dangertus ! "]
result_df = nlu.load('').predict(text)
result_df
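
The pipeline can also be applied directly to a Spark DataFrame instead of a list of strings. A minimal sketch, assuming an active Spark NLP session started with sparknlp.start() and an input column named text:

import sparknlp
from sparknlp.pretrained import PretrainedPipeline

spark = sparknlp.start()

# Build a one-row DataFrame with the text to correct
df = spark.createDataFrame([["I liek to live dangertus ! "]]).toDF("text")

pipeline = PretrainedPipeline('check_spelling', lang='en')
result = pipeline.transform(df)

# The corrected tokens live in the 'checked' output column
result.selectExpr("checked.result").show(truncate=False)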
    

Results

|    | document                         | sentence                        | token                                          | checked                                        |
|---:|:---------------------------------|:--------------------------------|:-----------------------------------------------|:-----------------------------------------------|
|  0 | ['I liek to live dangertus ! '] | ['I liek to live dangertus !'] | ['I', 'liek', 'to', 'live', 'dangertus', '!'] | ['I', 'like', 'to', 'live', 'dangerous', '!'] |

Model Information

Model Name: check_spelling
Type: pipeline
Compatibility: Spark NLP 3.0.0+
License: Open Source
Edition: Official
Language: en

Included Models

  • DocumentAssembler
  • SentenceDetector
  • TokenizerModel
  • NorvigSweetingModel
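
These stages are chained in that order inside the pretrained pipeline. For reference, here is a minimal sketch of an equivalent pipeline assembled by hand; the spell checker's pretrained model name (spellcheck_norvig) is an assumption here, and the bundled check_spelling pipeline already ships fully fitted:

from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector, Tokenizer, NorvigSweetingModel

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentence_detector = SentenceDetector() \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

tokenizer = Tokenizer() \
    .setInputCols(["sentence"]) \
    .setOutputCol("token")

# Assumed pretrained model name for the Norvig spell checker
spell_checker = NorvigSweetingModel.pretrained("spellcheck_norvig", "en") \
    .setInputCols(["token"]) \
    .setOutputCol("checked")

pipeline = Pipeline(stages=[
    document_assembler,
    sentence_detector,
    tokenizer,
    spell_checker
])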