English distil_asr_whisper_small WhisperForCTC from distil-whisper

Description

Pretrained WhisperForCTC model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. distil_asr_whisper_small is an English model originally trained by distil-whisper.

This model is only compatible with PySpark 3.4 and above.
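
As a quick check before running the pipeline, you can start a Spark NLP session and confirm the Spark and Spark NLP versions. A minimal sketch, assuming the spark-nlp Python package is installed alongside PySpark 3.4+:

import sparknlp

# Start (or attach to) a Spark session with Spark NLP on the classpath
spark = sparknlp.start()

print("Spark NLP version:", sparknlp.version())  # expect 5.2.4 or newer
print("Apache Spark version:", spark.version)    # expect 3.4 or newer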

Predicted Entities


How to use

from sparknlp.base import AudioAssembler
from sparknlp.annotator import WhisperForCTC
from pyspark.ml import Pipeline

# Wrap the raw audio signal into an annotation the model can consume
audioAssembler = AudioAssembler() \
    .setInputCol("audio_content") \
    .setOutputCol("audio_assembler")

# Load the pretrained Whisper model and point it at the assembled audio
speechToText = WhisperForCTC.pretrained("distil_asr_whisper_small", "en") \
    .setInputCols(["audio_assembler"]) \
    .setOutputCol("text")

pipeline = Pipeline().setStages([audioAssembler, speechToText])

pipelineModel = pipeline.fit(data)

pipelineDF = pipelineModel.transform(data)
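
The snippets here assume data is a Spark DataFrame with an audio_content column holding the raw audio signal as an array of floats. A minimal sketch for building it, assuming a recording decoded to 16 kHz mono with librosa (the file path is a placeholder):

import librosa

# Whisper models expect 16 kHz mono audio; resample on load
raw_floats, _ = librosa.load("path/to/recording.wav", sr=16000, mono=True)

# One row per recording; the column name must match setInputCol above
data = spark.createDataFrame([[raw_floats.tolist()]]).toDF("audio_content")

The same pipeline in Scala: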

import com.johnsnowlabs.nlp.AudioAssembler
import com.johnsnowlabs.nlp.annotators.audio.WhisperForCTC
import org.apache.spark.ml.Pipeline

// Wrap the raw audio signal into an annotation the model can consume
val audioAssembler = new AudioAssembler()
    .setInputCol("audio_content")
    .setOutputCol("audio_assembler")

// Load the pretrained Whisper model and point it at the assembled audio
val speechToText = WhisperForCTC.pretrained("distil_asr_whisper_small", "en")
    .setInputCols(Array("audio_assembler"))
    .setOutputCol("text")

val pipeline = new Pipeline().setStages(Array(audioAssembler, speechToText))

val pipelineModel = pipeline.fit(data)

val pipelineDF = pipelineModel.transform(data)
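
After the transform, the transcription is stored in the result field of the text annotation column. A quick way to inspect it from Python:

pipelineDF.select("text.result").show(truncate=False)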

Model Information

Model Name: distil_asr_whisper_small
Compatibility: Spark NLP 5.2.4+
License: Open Source
Edition: Official
Input Labels: [audio_assembler]
Output Labels: [text]
Language: en
Size: 748.5 MB

References

https://huggingface.co/distil-whisper/distil-small.en