sparknlp.annotator.audio.wav2vec2_for_ctc
Contains classes concerning Wav2Vec2ForCTC.
Module Contents
Classes
Wav2Vec2ForCTC | Wav2Vec2 Model with a language modeling head on top for Connectionist Temporal Classification (CTC).
- class Wav2Vec2ForCTC(classname='com.johnsnowlabs.nlp.annotators.audio.Wav2Vec2ForCTC', java_model=None)[source]
Wav2Vec2 Model with a language modeling head on top for Connectionist Temporal Classification (CTC). Wav2Vec2 was proposed in wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
The annotator takes audio files and transcribes them as text. The audio needs to be provided pre-processed as an array of floats.
Note that this annotator is currently not supported on Apple Silicon processors such as the M1. This is due to the processor not supporting instructions for XLA.
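As a minimal sketch (not part of the Spark NLP API), a 16 kHz mono 16-bit PCM WAV file can be turned into such an array of floats using only the Python standard library; the file name "speech.wav" is a placeholder, and rawFloats matches the variable used in the example further below:
>>> import wave, struct
>>> def wav_to_floats(path):
...     # Read a mono 16-bit PCM WAV file and normalize samples to [-1.0, 1.0]
...     with wave.open(path, "rb") as wav:
...         assert wav.getnchannels() == 1   # the model expects mono audio
...         assert wav.getsampwidth() == 2   # 16-bit samples
...         frames = wav.readframes(wav.getnframes())
...     samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
...     return [s / 32768.0 for s in samples]
>>> rawFloats = wav_to_floats("speech.wav")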
Pretrained models can be loaded with pretrained of the companion object:
>>> speechToText = Wav2Vec2ForCTC.pretrained() \
...     .setInputCols(["audio_assembler"]) \
...     .setOutputCol("text")
The default model is "asr_wav2vec2_base_960h", if no name is provided. For available pretrained models, please see the Models Hub.
To see which models are compatible and how to import them, see JohnSnowLabs/spark-nlp#5669; for more extended examples, see Wav2Vec2ForCTCTestSpec.
Input Annotation types: AUDIO
Output Annotation type: DOCUMENT
- Parameters:
- batchSize : int
Size of each batch, by default 2 (see the sketch below)
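A usage sketch for raising the batch size on an instantiated annotator (the setter follows Spark NLP's convention for batched annotators; larger batches trade memory for throughput):
>>> speechToText = Wav2Vec2ForCTC.pretrained() \
...     .setInputCols(["audio_assembler"]) \
...     .setOutputCol("text") \
...     .setBatchSize(4)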
Examples
>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
>>> audioAssembler = AudioAssembler() \
...     .setInputCol("audio_content") \
...     .setOutputCol("audio_assembler")
>>> speechToText = Wav2Vec2ForCTC \
...     .pretrained() \
...     .setInputCols(["audio_assembler"]) \
...     .setOutputCol("text")
>>> pipeline = Pipeline().setStages([audioAssembler, speechToText])
>>> processedAudioFloats = spark.createDataFrame([[rawFloats]]).toDF("audio_content")
>>> result = pipeline.fit(processedAudioFloats).transform(processedAudioFloats)
>>> result.select("text.result").show(truncate = False)
+------------------------------------------------------------------------------------------+
|result                                                                                    |
+------------------------------------------------------------------------------------------+
|[MISTER QUILTER IS THE APOSTLE OF THE MIDLE CLASES AND WE ARE GLAD TO WELCOME HIS GOSPEL ]|
+------------------------------------------------------------------------------------------+
- setConfigProtoBytes(b)[source]
Sets configProto from tensorflow, serialized into byte array.
- Parameters:
- b : List[int]
ConfigProto from tensorflow, serialized into a byte array (see the sketch below)
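An illustrative sketch of how such a byte array could be built, assuming the tensorflow Python package is installed (the specific option set here is only an example):
>>> import tensorflow as tf
>>> config = tf.compat.v1.ConfigProto()
>>> config.gpu_options.allow_growth = True  # example setting; any ConfigProto options apply
>>> speechToText.setConfigProtoBytes(list(config.SerializeToString()))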
- static loadSavedModel(folder, spark_session)[source]
Loads a locally saved model.
- Parameters:
- folder : str
Folder of the saved model
- spark_session : pyspark.sql.SparkSession
The current SparkSession
- Returns:
- Wav2Vec2ForCTC
The restored model
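A usage sketch, assuming a compatible model has already been exported to a local folder (the path below is a placeholder; the import process is described in spark-nlp#5669):
>>> speechToText = Wav2Vec2ForCTC.loadSavedModel("/path/to/exported_model", spark) \
...     .setInputCols(["audio_assembler"]) \
...     .setOutputCol("text")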
- static pretrained(name='asr_wav2vec2_base_960h', lang='en', remote_loc=None)[source]
Downloads and loads a pretrained model.
- Parameters:
- name : str, optional
Name of the pretrained model, by default "asr_wav2vec2_base_960h"
- lang : str, optional
Language of the pretrained model, by default "en"
- remote_loc : str, optional
Optional remote address of the resource, by default None. Will use Spark NLP's repositories otherwise.
- Returns:
- Wav2Vec2ForCTC
The restored model
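For illustration, the following two calls are equivalent, since the defaults supply the model name and language:
>>> speechToText = Wav2Vec2ForCTC.pretrained()
>>> speechToText = Wav2Vec2ForCTC.pretrained("asr_wav2vec2_base_960h", lang="en")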