BLIP Question Answering

Description

BLIP model for visual question answering. The model consists of a vision encoder, a text encoder, and a text decoder. The vision encoder encodes the input image, the text encoder encodes the input question together with the image encoding, and the text decoder generates the answer to the question.

Predicted Entities


How to use

To use the model, create a DataFrame with two columns:

  • An image column containing the file path of each image in the directory.
  • A text column holding the question to ask about each image.

For example:

from pyspark.ml import Pipeline
from pyspark.sql.functions import lit
from sparknlp.base import ImageAssembler
from sparknlp.annotator import BLIPForQuestionAnswering

# Load every image in the directory into a Spark image DataFrame
images_path = "./images/"
image_df = spark.read.format("image").load(path=images_path)

# Pair each image with the question to ask about it
test_df = image_df.withColumn("text", lit("What's this picture about?"))
test_df.show()

# Assemble raw images into the annotation format expected by vision annotators
imageAssembler = ImageAssembler() \
  .setInputCol("image") \
  .setOutputCol("image_assembler")

# Load a locally saved copy of the model, e.g. exported as ./blip_vqa_base_spark_nlp
MODEL_NAME = "blip_vqa_base"
imageClassifier = BLIPForQuestionAnswering.load("./{}_spark_nlp".format(MODEL_NAME)) \
  .setInputCols("image_assembler") \
  .setOutputCol("answer") \
  .setSize(384)

pipeline = Pipeline(
    stages=[
        imageAssembler,
        imageClassifier,
    ]
)

# Fit and run the pipeline, then show each image path with its predicted answer
model = pipeline.fit(test_df)
result = model.transform(test_df)
result.select("image_assembler.origin", "answer.result").show(truncate=False)
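
For quick checks on a single image, the fitted pipeline can be wrapped in a LightPipeline. A minimal sketch, assuming the fullAnnotateImage variant that accepts an image path together with a question (the image path below is hypothetical; verify the signature against your Spark NLP version):

from sparknlp.base import LightPipeline

light_pipeline = LightPipeline(model)

# Annotate a single image with a single question
annotations = light_pipeline.fullAnnotateImage(
    "./images/example.jpg",          # hypothetical image path
    "What's this picture about?"
)
print(annotations[0]["answer"][0].result)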

The same pipeline in Scala:

val imageAssembler: ImageAssembler = new ImageAssembler()
  .setInputCol("image")
  .setOutputCol("image_assembler")

val loadModel = BLIPForQuestionAnswering
  .pretrained()
  .setInputCols("image_assembler")
  .setOutputCol("answer")
  .setSize(384)

val newPipeline: Pipeline =
  new Pipeline().setStages(Array(imageAssembler, loadModel))

val model = newPipeline.fit(testDF)
val result = model.transform(testDF)

result.select("image_assembler.origin", "answer.result").show(truncate = false)

Model Information

Model Name: blip_vqa_base
Compatibility: Spark NLP 5.5.0+
License: Open Source
Edition: Official
Language: en
Size: 1.4 GB
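
To download this exact model instead of loading a local copy, pass the model name and language from the table above to pretrained(). A minimal sketch following the standard Spark NLP convention:

from sparknlp.annotator import BLIPForQuestionAnswering

# Fetch blip_vqa_base for English explicitly by name
imageClassifier = BLIPForQuestionAnswering.pretrained("blip_vqa_base", "en") \
  .setInputCols("image_assembler") \
  .setOutputCol("answer") \
  .setSize(384)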