Phi-3.5-mini Q4_K_M GGUF

Description

Phi-3.5-mini is a lightweight, state-of-the-art open model built upon the datasets used for Phi-3 - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning-dense data. The model belongs to the Phi-3 model family and supports a 128K token context length.

The original model is available at https://huggingface.co/bartowski/Phi-3.5-mini-instruct-GGUF.

How to use

import sparknlp
from sparknlp.base import *
from sparknlp.annotator import *
from pyspark.ml import Pipeline

# Start a Spark session with Spark NLP (skip if a session already exists)
spark = sparknlp.start()

# Turns the raw input text into document annotations
document = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("document")

# Pretrained GGUF model; setNGpuLayers(99) offloads all layers to the GPU,
# setNPredict(20) limits the completion to 20 generated tokens
autoGGUFModel = AutoGGUFModel.pretrained() \
    .setInputCols(["document"]) \
    .setOutputCol("completions") \
    .setBatchSize(4) \
    .setNPredict(20) \
    .setNGpuLayers(99) \
    .setTemperature(0.4) \
    .setTopK(40) \
    .setTopP(0.9) \
    .setPenalizeNl(True)

pipeline = Pipeline().setStages([document, autoGGUFModel])
data = spark.createDataFrame([["Hello, I am a"]]).toDF("text")
result = pipeline.fit(data).transform(data)
result.select("completions").show(truncate = False)

import com.johnsnowlabs.nlp.base._
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import spark.implicits._

// Turns the raw input text into document annotations
val document = new DocumentAssembler()
  .setInputCol("text")
  .setOutputCol("document")

// Pretrained GGUF model; setNGpuLayers(99) offloads all layers to the GPU,
// setNPredict(20) limits the completion to 20 generated tokens
val autoGGUFModel = AutoGGUFModel
  .pretrained()
  .setInputCols("document")
  .setOutputCol("completions")
  .setBatchSize(4)
  .setNPredict(20)
  .setNGpuLayers(99)
  .setTemperature(0.4f)
  .setTopK(40)
  .setTopP(0.9f)
  .setPenalizeNl(true)

val pipeline = new Pipeline().setStages(Array(document, autoGGUFModel))

val data = Seq("Hello, I am a").toDF("text")
val result = pipeline.fit(data).transform(data)
result.select("completions").show(truncate = false)

Results

+-----------------------------------------------------------------------------------------------------------------------------------+
|completions                                                                                                                        |
+-----------------------------------------------------------------------------------------------------------------------------------+
|[{document, 0, 78,  new user.  I am currently working on a project and I need to create a list of , {prompt -> Hello, I am a}, []}]|
+-----------------------------------------------------------------------------------------------------------------------------------+
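
The completions column holds full Spark NLP annotations. If you only need the generated strings, you can explode the nested result field; a short sketch using the column names from the pipeline above:

from pyspark.sql import functions as F

# Each annotation in "completions" carries the generated text in its "result" field
result.select(F.explode(F.col("completions.result")).alias("generated")) \
    .show(truncate=False)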

Model Information

Model Name: phi3.5_mini_4k_instruct_q4_gguf
Compatibility: Spark NLP 5.5.0+
License: Open Source
Edition: Official
Input Labels: [document]
Output Labels: [completions]
Language: en
Size: 2.4 GB
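
To pin the download to this specific model rather than the library default, the name and language from the table above can be passed to pretrained() explicitly, for example:

# Uses the model name and language listed under Model Information
autoGGUFModel = AutoGGUFModel.pretrained("phi3.5_mini_4k_instruct_q4_gguf", "en") \
    .setInputCols(["document"]) \
    .setOutputCol("completions")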