sparknlp.annotator.embeddings.minilm_embeddings#

Contains classes for MiniLMEmbeddings.

Module Contents#

Classes#

MiniLMEmbeddings

Sentence embeddings using MiniLM.

class MiniLMEmbeddings(classname='com.johnsnowlabs.nlp.embeddings.MiniLMEmbeddings', java_model=None)[source]#

Sentence embeddings using MiniLM.

MiniLM is a lightweight and efficient sentence embedding model that generates text embeddings for a variety of NLP tasks (e.g., classification, retrieval, clustering, and text evaluation). Note that this annotator is only supported for Spark versions 3.4 and up.

Pretrained models can be loaded with pretrained() of the companion object:

>>> embeddings = MiniLMEmbeddings.pretrained() \
...     .setInputCols(["document"]) \
...     .setOutputCol("minilm_embeddings")

The default model is "minilm_l6_v2", if no name is provided.

For available pretrained models please see the Models Hub.

Input Annotation types: DOCUMENT

Output Annotation type: SENTENCE_EMBEDDINGS

Parameters:
batchSize

Size of every batch, by default 8

dimension

Number of embedding dimensions, by default 384

caseSensitive

Whether to ignore case in tokens for embeddings matching, by default False

maxSentenceLength

Max sentence length to process, by default 512

configProtoBytes

ConfigProto from tensorflow, serialized into byte array.

References

MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers

MiniLM Github Repository

Paper abstract

We present a simple and effective approach to compress large pre-trained Transformer models by distilling the self-attention module of the last Transformer layer. The compressed model (called MiniLM) can be trained with task-agnostic distillation and then fine-tuned on various downstream tasks. We evaluate MiniLM on the GLUE benchmark and show that it achieves comparable results with BERT-base while being 4.3x smaller and 5.5x faster. We also show that MiniLM can be further compressed to 22x smaller and 12x faster than BERT-base while maintaining comparable performance.

Examples

>>> import sparknlp
>>> from sparknlp.base import *
>>> from sparknlp.annotator import *
>>> from pyspark.ml import Pipeline
>>> documentAssembler = DocumentAssembler() \
...     .setInputCol("text") \
...     .setOutputCol("document")
>>> embeddings = MiniLMEmbeddings.pretrained() \
...     .setInputCols(["document"]) \
...     .setOutputCol("minilm_embeddings")
>>> embeddingsFinisher = EmbeddingsFinisher() \
...     .setInputCols(["minilm_embeddings"]) \
...     .setOutputCols("finished_embeddings") \
...     .setOutputAsVector(True)
>>> pipeline = Pipeline().setStages([
...     documentAssembler,
...     embeddings,
...     embeddingsFinisher
... ])
>>> data = spark.createDataFrame([
...     ["This is a sample sentence for embedding generation."],
...     ["Another example sentence to demonstrate MiniLM embeddings."]
... ]).toDF("text")
>>> result = pipeline.fit(data).transform(data)
>>> result.selectExpr("explode(finished_embeddings) as result").show(5, 80)
+--------------------------------------------------------------------------------+
|                                                                          result|
+--------------------------------------------------------------------------------+
|[[0.1234567, -0.2345678, 0.3456789, -0.4567890, 0.5678901, -0.6789012...|
|[[0.2345678, -0.3456789, 0.4567890, -0.5678901, 0.6789012, -0.7890123...|
+--------------------------------------------------------------------------------+
name = 'MiniLMEmbeddings'[source]#
inputAnnotatorTypes[source]#
outputAnnotatorType = 'sentence_embeddings'[source]#
configProtoBytes[source]#
setConfigProtoBytes(b)[source]#

Sets configProto from tensorflow, serialized into byte array.

Parameters:
b : List[int]

ConfigProto from tensorflow, serialized into byte array

static loadSavedModel(folder, spark_session, use_openvino=False)[source]#

Loads a locally saved model.

Parameters:
folder : str

Folder of the saved model

spark_session : pyspark.sql.SparkSession

The current SparkSession

use_openvino : bool, optional

Whether to use the OpenVINO backend, by default False

Returns:
MiniLMEmbeddings

The restored model

static pretrained(name='minilm_l6_v2', lang='en', remote_loc=None)[source]#

Downloads and loads a pretrained model.

Parameters:
name : str, optional

Name of the pretrained model, by default “minilm_l6_v2”

lang : str, optional

Language of the pretrained model, by default “en”

remote_loc : str, optional

Optional remote address of the resource, by default None. Will use Spark NLP's repositories otherwise.

Returns:
MiniLMEmbeddings

The restored model