English distil_asr_whisper_medium WhisperForCTC from distil-whisper

Description

Pretrained WhisperForCTC model, adapted from Hugging Face and curated to provide scalability and production-readiness using Spark NLP. distil_asr_whisper_medium is an English model originally trained by distil-whisper.

This model is only compatible with PySpark 3.4 and above

Predicted Entities


How to use

import sparknlp
from sparknlp.base import AudioAssembler
from sparknlp.annotator import WhisperForCTC
from pyspark.ml import Pipeline

# Assemble the raw audio floats from the "audio_content" column
audioAssembler = AudioAssembler() \
    .setInputCol("audio_content") \
    .setOutputCol("audio_assembler")

# Load the pretrained English model and transcribe the assembled audio
speechToText = WhisperForCTC.pretrained("distil_asr_whisper_medium", "en") \
    .setInputCols(["audio_assembler"]) \
    .setOutputCol("text")

pipeline = Pipeline().setStages([audioAssembler, speechToText])

# `data` is a DataFrame with an "audio_content" column of raw audio floats
pipelineModel = pipeline.fit(data)

pipelineDF = pipelineModel.transform(data)
// Scala equivalent
import com.johnsnowlabs.nlp.AudioAssembler
import com.johnsnowlabs.nlp.annotators.audio.WhisperForCTC
import org.apache.spark.ml.Pipeline

val audioAssembler = new AudioAssembler()
  .setInputCol("audio_content")
  .setOutputCol("audio_assembler")

val speechToText = WhisperForCTC.pretrained("distil_asr_whisper_medium", "en")
  .setInputCols(Array("audio_assembler"))
  .setOutputCol("text")

val pipeline = new Pipeline().setStages(Array(audioAssembler, speechToText))
val pipelineModel = pipeline.fit(data)
val pipelineDF = pipelineModel.transform(data)
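The `data` DataFrame above needs an `audio_content` column holding the raw audio as floats (Whisper models expect 16 kHz audio). A minimal, standard-library-only sketch of decoding 16-bit PCM WAV bytes into such floats; the in-memory sine-wave WAV is only a placeholder for your own audio files, and `wav_to_floats` is a hypothetical helper, not part of the Spark NLP API:

```python
# Sketch (assumption): decoding a 16-bit mono PCM WAV into normalized floats,
# the shape of data AudioAssembler reads from the "audio_content" column.
import io
import math
import struct
import wave

def wav_to_floats(wav_bytes: bytes) -> list:
    """Decode 16-bit mono PCM WAV bytes into floats in [-1.0, 1.0]."""
    with wave.open(io.BytesIO(wav_bytes)) as wf:
        frames = wf.readframes(wf.getnframes())
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    return [s / 32768.0 for s in samples]

# Placeholder input: one second of a 440 Hz tone as a 16 kHz mono WAV.
buf = io.BytesIO()
with wave.open(buf, "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)
    wf.setframerate(16000)
    wf.writeframes(struct.pack(
        "<16000h",
        *(int(16000 * math.sin(2 * math.pi * 440 * t / 16000))
          for t in range(16000))))

raw_floats = wav_to_floats(buf.getvalue())
# In Spark, this list would become the "audio_content" column, e.g.:
# data = spark.createDataFrame([[raw_floats]]).toDF("audio_content")
```

In practice you would read your own recordings (resampled to 16 kHz) rather than synthesizing audio; libraries such as librosa can handle resampling and other formats.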

Model Information

Model Name: distil_asr_whisper_medium
Compatibility: Spark NLP 5.2.4+
License: Open Source
Edition: Official
Input Labels: [audio_assembler]
Output Labels: [text]
Language: en
Size: 1.4 GB

References

https://huggingface.co/distil-whisper/distil-medium.en