
Streaming Example

Real-time response streaming with the PromptKit SDK.

This example demonstrates:

  • Using Stream() for real-time responses
  • Processing chunks as they arrive
  • Handling stream completion and errors
  • Progress tracking during generation

Prerequisites:

  • Go 1.21+
  • OpenAI API key
Run the example:

```sh
export OPENAI_API_KEY=your-key
go run .
```
Open a conversation from the pack and range over the stream:

```go
conv, err := sdk.Open("./streaming.pack.json", "storyteller")
if err != nil {
	log.Fatal(err)
}
defer conv.Close()

ctx := context.Background()

// Stream responses in real-time
for chunk := range conv.Stream(ctx, "Tell me a short story") {
	if chunk.Error != nil {
		log.Printf("Error: %v", chunk.Error)
		break
	}
	if chunk.Type == sdk.ChunkDone {
		fmt.Println("\n[Complete]")
		break
	}
	// Print text as it arrives
	fmt.Print(chunk.Text)
}
```
Each chunk carries a type:

```go
const (
	ChunkText     // Text content arrived
	ChunkToolCall // Tool is being called
	ChunkDone     // Stream completed
)
```

The streaming.pack.json defines:

  • Provider: OpenAI with gpt-4o-mini
  • Prompt: A creative storyteller with higher temperature
```json
{
  "prompts": {
    "storyteller": {
      "system_template": "You are a creative storyteller...",
      "parameters": {
        "temperature": 0.9,
        "max_tokens": 500
      }
    }
  }
}
```

Track generation progress while streaming:

```go
var charCount int
for chunk := range conv.Stream(ctx, "Tell me about AI") {
	if chunk.Error != nil {
		log.Printf("Error: %v", chunk.Error)
		break
	}
	if chunk.Type == sdk.ChunkDone {
		fmt.Printf("\n[Complete - %d characters]\n", charCount)
		break
	}
	fmt.Print(chunk.Text)
	// Count runes (characters), not bytes, so multibyte text is counted correctly
	charCount += utf8.RuneCountInString(chunk.Text)
}
```
Key points:

  1. Channel-Based - Stream() returns a channel of chunks
  2. Non-Blocking - Print responses as they arrive
  3. Error Handling - Check chunk.Error for issues
  4. Completion - ChunkDone signals end of stream
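The channel mechanics behind these points can be sketched without the SDK: a producer goroutine sends chunks and closes the channel, and the consumer's `range` loop terminates when the channel is closed. The `stream` function here is a mock of a Stream()-style API, not PromptKit's implementation:

```go
package main

import "fmt"

// chunk is a minimal stand-in for the SDK's chunk value.
type chunk struct {
	Text string
	Done bool
}

// stream mocks a Stream()-style API: it returns a receive-only channel,
// feeds it from a goroutine, and closes it when generation ends.
func stream(words []string) <-chan chunk {
	out := make(chan chunk)
	go func() {
		defer close(out) // ranging over out stops once the channel is closed
		for _, w := range words {
			out <- chunk{Text: w + " "}
		}
		out <- chunk{Done: true}
	}()
	return out
}

func main() {
	for c := range stream([]string{"Once", "upon", "a", "time"}) {
		if c.Done {
			fmt.Println("\n[Complete]")
			break
		}
		fmt.Print(c.Text)
	}
}
```

Because the producer runs in its own goroutine, the consumer prints each chunk as soon as it is sent rather than waiting for the whole response, which is the non-blocking behavior the list above describes.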