Clean patterns pipeline for English

Description

The clean_pattern pipeline is a pretrained pipeline that processes text with a small set of basic steps: document assembly, sentence detection, tokenization, and normalization. It covers most of the common text preprocessing tasks on your DataFrame.
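Besides fullAnnotate (shown under "How to use" below), the pipeline can be applied directly to a Spark DataFrame via transform. The snippet below is a minimal sketch, assuming a SparkSession started with sparknlp.start(); the toy DataFrame and the input column name "text" are illustrative assumptions.

```python
import sparknlp
from sparknlp.pretrained import PretrainedPipeline

# Start (or attach to) a Spark session with Spark NLP available.
spark = sparknlp.start()

# Toy DataFrame; "text" is the column the pipeline reads from (assumed here).
df = spark.createDataFrame([("Hello from John Snow Labs ! ",)], ["text"])

# Download the pretrained pipeline and run it over the DataFrame.
pipeline = PretrainedPipeline("clean_pattern", lang="en")
result = pipeline.transform(df)

# Each stage writes its own output column: document, sentence, token, normal.
result.select("token.result", "normal.result").show(truncate=False)
```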

Predicted Entities


How to use

```python
from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline('clean_pattern', lang='en')
annotations = pipeline.fullAnnotate("Hello from John Snow Labs ! ")[0]
annotations.keys()
```

```scala
import com.johnsnowlabs.nlp.pretrained.PretrainedPipeline

val pipeline = new PretrainedPipeline("clean_pattern", lang = "en")
val result = pipeline.fullAnnotate("Hello from John Snow Labs ! ")(0)
```

```python
import nlu

text = ["Hello from John Snow Labs ! "]
result_df = nlu.load('en.clean.pattern').predict(text)
result_df
```

Results



|    | document   | sentence   | token     | normal    |
|---:|:-----------|:-----------|:----------|:----------|
|  0 | ['Hello']  | ['Hello']  | ['Hello'] | ['Hello'] |
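A table like the one above can be assembled from the lighter-weight annotate() output, which returns a plain dictionary keyed by the pipeline's output columns. A small sketch, assuming a running Spark session (e.g. via sparknlp.start()) and that pandas is installed; the column names are taken from the results shown above.

```python
import pandas as pd
from sparknlp.pretrained import PretrainedPipeline

pipeline = PretrainedPipeline("clean_pattern", lang="en")

# annotate() returns a dict: output column name -> list of result strings.
annotations = pipeline.annotate("Hello from John Snow Labs ! ")

# One-row pandas DataFrame whose columns mirror the pipeline's output keys.
result_df = pd.DataFrame([annotations])
print(result_df[["document", "sentence", "token", "normal"]])
```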



Model Information

Model Name: clean_pattern
Type: pipeline
Compatibility: Spark NLP 4.4.2+
License: Open Source
Edition: Official
Language: en
Size: 17.2 KB

Included Models

  • DocumentAssembler
  • SentenceDetector
  • TokenizerModel
  • NormalizerModel
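For reference, the sketch below hand-builds an equivalent pipeline from the four components listed above using the public Spark NLP annotators. The parameter values (including the normalizer's cleanup pattern) are assumptions; the pretrained pipeline's actual settings are not listed on this page.

```python
import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import SentenceDetector, Tokenizer, Normalizer
from pyspark.ml import Pipeline

spark = sparknlp.start()

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

sentence_detector = SentenceDetector() \
    .setInputCols(["document"]) \
    .setOutputCol("sentence")

tokenizer = Tokenizer() \
    .setInputCols(["sentence"]) \
    .setOutputCol("token")

# The cleanup pattern here is an assumption, not the pretrained model's setting.
normalizer = Normalizer() \
    .setInputCols(["token"]) \
    .setOutputCol("normal") \
    .setCleanupPatterns(["[^\\w\\d\\s]"])

pipeline = Pipeline(stages=[document_assembler, sentence_detector, tokenizer, normalizer])

df = spark.createDataFrame([("Hello from John Snow Labs ! ",)], ["text"])
model = pipeline.fit(df)
model.transform(df).select("normal.result").show(truncate=False)
```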