This example demonstrates multimodal (vision) capabilities using the PromptKit SDK with streaming responses.

Features

  - Streaming image analysis with incremental text chunks
  - Non-streaming (single-response) image analysis
  - Image input from a URL, a local file, or raw bytes
  - Google Gemini vision support

Prerequisites

  1. A Google Gemini API key (for vision capabilities)
  2. Go 1.21 or later

Setup

export GEMINI_API_KEY=your-gemini-api-key

Running the Example

cd sdk/examples/multimodal
go run .

How It Works

Opening a Multimodal Conversation

// Open loads the pack file and binds the "vision-analyst" prompt.
conv, err := sdk.Open("./multimodal.pack.json", "vision-analyst")
if err != nil {
    log.Fatalf("Failed to open pack: %v", err)
}
defer conv.Close()

Streaming Image Analysis

// ctx carries cancellation for the streaming call.
ctx := context.Background()

for chunk := range conv.Stream(ctx, "What do you see in this image?",
    sdk.WithImageURL("https://example.com/image.jpg"),
) {
    if chunk.Error != nil {
        log.Printf("Error: %v", chunk.Error)
        break
    }
    if chunk.Type == sdk.ChunkDone {
        break
    }
    fmt.Print(chunk.Text)
}
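When streaming, it is often useful to keep the full response as well as printing chunks as they arrive. The sketch below shows that accumulation pattern; the `chunk` type and the mock channel are illustrative stand-ins for the SDK's chunk stream, not part of its API.

```go
package main

import (
	"fmt"
	"strings"
)

// chunk mirrors the fields used in the streaming loop above (text, an
// error, and a done signal); the real sdk chunk type may differ.
type chunk struct {
	Text string
	Err  error
	Done bool
}

// collect drains a chunk channel the same way the streaming loop does,
// accumulating text so the complete response is available afterwards.
func collect(ch <-chan chunk) (string, error) {
	var b strings.Builder
	for c := range ch {
		if c.Err != nil {
			return b.String(), c.Err
		}
		if c.Done {
			break
		}
		b.WriteString(c.Text)
	}
	return b.String(), nil
}

func main() {
	// Mock stream standing in for conv.Stream(...).
	ch := make(chan chunk, 3)
	ch <- chunk{Text: "A red "}
	ch <- chunk{Text: "bicycle."}
	ch <- chunk{Done: true}
	close(ch)

	text, err := collect(ch)
	fmt.Println(text, err) // A red bicycle. <nil>
}
```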

Non-Streaming Image Analysis

resp, err := conv.Send(ctx, "Describe this image",
    sdk.WithImageURL("https://example.com/image.jpg"),
)
if err != nil {
    log.Fatalf("Error: %v", err)
}
fmt.Println(resp.Text())

Image Input Options

The SDK supports multiple ways to provide images:

From URL

sdk.WithImageURL("https://example.com/image.jpg")

From File

sdk.WithImageFile("/path/to/local/image.png")

From Raw Data

sdk.WithImageData(imageBytes, "image/png")

Supported Providers

Multimodal capabilities require a provider whose models accept image input. This example uses Google Gemini, which supports vision.

Pack Configuration

The pack file configures the vision analyst prompt:

{
  "prompts": {
    "vision-analyst": {
      "id": "vision-analyst",
      "name": "Vision Analyst",
      "system_template": "You are an expert visual analyst...",
      "parameters": {
        "temperature": 0.7,
        "max_tokens": 1024
      }
    }
  }
}

Notes