Real-time response streaming with the PromptKit SDK.

What You’ll Learn

How to stream responses chunk by chunk over a channel, distinguish the chunk types the SDK emits, and track generation progress while streaming.

Prerequisites

A Go toolchain and an OpenAI API key (the example reads it from the OPENAI_API_KEY environment variable).

Running the Example

export OPENAI_API_KEY=your-key
go run .

Code Overview

conv, err := sdk.Open("./streaming.pack.json", "storyteller")
if err != nil {
    log.Fatal(err)
}
defer conv.Close()

ctx := context.Background()

// Stream responses in real-time
for chunk := range conv.Stream(ctx, "Tell me a short story") {
    if chunk.Error != nil {
        log.Printf("Error: %v", chunk.Error)
        break
    }
    if chunk.Type == sdk.ChunkDone {
        fmt.Println("\n[Complete]")
        break
    }
    // Print text as it arrives
    fmt.Print(chunk.Text)
}

Chunk Types

const (
    ChunkText     // Text content arrived
    ChunkToolCall // Tool is being called
    ChunkDone     // Stream completed
)
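
A stream loop can dispatch on these types with a switch instead of separate ifs. The sketch below is self-contained: the Chunk and ChunkType definitions and the locally built channel are stand-ins for the sdk package's types and for the channel returned by conv.Stream, used here only so the pattern is runnable on its own.

```go
package main

import (
	"fmt"
	"strings"
)

// Stand-ins for the SDK's chunk types, for illustration only.
type ChunkType int

const (
	ChunkText ChunkType = iota
	ChunkToolCall
	ChunkDone
)

type Chunk struct {
	Type ChunkType
	Text string
}

// consume drains a stream channel, dispatching on chunk type,
// and returns everything it would have printed.
func consume(ch <-chan Chunk) string {
	var b strings.Builder
	for chunk := range ch {
		switch chunk.Type {
		case ChunkText:
			b.WriteString(chunk.Text)
		case ChunkToolCall:
			b.WriteString("\n[tool call in progress]")
		case ChunkDone:
			b.WriteString("\n[Complete]\n")
		}
	}
	return b.String()
}

func main() {
	// Simulate a short stream like the one conv.Stream would return.
	ch := make(chan Chunk, 3)
	ch <- Chunk{Type: ChunkText, Text: "Once upon a time"}
	ch <- Chunk{Type: ChunkToolCall}
	ch <- Chunk{Type: ChunkDone}
	close(ch)

	fmt.Print(consume(ch))
}
```

A switch also makes it harder to silently drop a chunk type you forgot to handle, which matters once tool calls enter the stream.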

Pack File Structure

The streaming.pack.json file defines the storyteller prompt and its generation parameters:

{
  "prompts": {
    "storyteller": {
      "system_template": "You are a creative storyteller...",
      "parameters": {
        "temperature": 0.9,
        "max_tokens": 500
      }
    }
  }
}

Progress Tracking

Track generation progress while streaming:

var charCount int

for chunk := range conv.Stream(ctx, "Tell me about AI") {
    if chunk.Error != nil {
        log.Printf("Error: %v", chunk.Error)
        break
    }
    if chunk.Type == sdk.ChunkDone {
        fmt.Printf("\n[Complete - %d characters]\n", charCount)
        break
    }
    fmt.Print(chunk.Text)
    // Count characters, not bytes (needs "unicode/utf8").
    charCount += utf8.RuneCountInString(chunk.Text)
}

Key Concepts

  1. Channel-Based - Stream() returns a channel of chunks
  2. Incremental - text is printed as it arrives instead of after the full reply
  3. Error Handling - Check chunk.Error for issues
  4. Completion - ChunkDone signals end of stream

Next Steps