Real-time response streaming with the PromptKit SDK.
What You’ll Learn
- Using `Stream()` for real-time responses
- Processing chunks as they arrive
- Handling stream completion and errors
- Progress tracking during generation
Prerequisites
- Go 1.21+
- OpenAI API key
Running the Example
export OPENAI_API_KEY=your-key
go run .
Code Overview
conv, err := sdk.Open("./streaming.pack.json", "storyteller")
if err != nil {
    log.Fatal(err)
}
defer conv.Close()

ctx := context.Background()

// Stream responses in real-time
for chunk := range conv.Stream(ctx, "Tell me a short story") {
    if chunk.Error != nil {
        log.Printf("Error: %v", chunk.Error)
        break
    }
    if chunk.Type == sdk.ChunkDone {
        fmt.Println("\n[Complete]")
        break
    }
    // Print text as it arrives
    fmt.Print(chunk.Text)
}
Chunk Types
const (
    ChunkText     // Text content arrived
    ChunkToolCall // A tool is being called
    ChunkDone     // Stream completed
)
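Because chunks of different types arrive on the same channel, a `switch` on the chunk type keeps the consumer loop readable as handling grows. Here is a minimal, self-contained sketch of that pattern; the `Chunk` struct and `stream` producer below are stand-ins for illustration, not the real SDK types:

```go
package main

import "fmt"

// ChunkType mirrors the constants listed above (stand-in, not the SDK's type).
type ChunkType int

const (
	ChunkText ChunkType = iota
	ChunkToolCall
	ChunkDone
)

// Chunk is a stand-in for the SDK's chunk value.
type Chunk struct {
	Type ChunkType
	Text string
	Err  error
}

// stream simulates a channel-based producer like conv.Stream:
// a goroutine sends chunks and closes the channel when finished.
func stream() <-chan Chunk {
	ch := make(chan Chunk)
	go func() {
		defer close(ch)
		for _, s := range []string{"Once ", "upon ", "a time."} {
			ch <- Chunk{Type: ChunkText, Text: s}
		}
		ch <- Chunk{Type: ChunkDone}
	}()
	return ch
}

func main() {
	for chunk := range stream() {
		switch chunk.Type {
		case ChunkText:
			fmt.Print(chunk.Text)
		case ChunkToolCall:
			fmt.Println("[tool call]")
		case ChunkDone:
			fmt.Println("\n[Complete]")
		}
	}
}
```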
Pack File Structure
The `streaming.pack.json` file defines:
- Provider: OpenAI with `gpt-4o-mini`
- Prompt: a creative storyteller with a higher temperature
{
  "prompts": {
    "storyteller": {
      "system_template": "You are a creative storyteller...",
      "parameters": {
        "temperature": 0.9,
        "max_tokens": 500
      }
    }
  }
}
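If you need to inspect a pack file outside the SDK, the JSON above maps cleanly onto Go structs with the standard `encoding/json` package. A sketch, assuming only the fields shown in the example (real pack files likely carry more):

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Minimal structs mirroring the pack file shown above (assumed shape).
type Pack struct {
	Prompts map[string]Prompt `json:"prompts"`
}

type Prompt struct {
	SystemTemplate string     `json:"system_template"`
	Parameters     Parameters `json:"parameters"`
}

type Parameters struct {
	Temperature float64 `json:"temperature"`
	MaxTokens   int     `json:"max_tokens"`
}

// parsePack decodes a pack file's bytes into the structs above.
func parsePack(data []byte) (Pack, error) {
	var p Pack
	err := json.Unmarshal(data, &p)
	return p, err
}

func main() {
	data := []byte(`{
	  "prompts": {
	    "storyteller": {
	      "system_template": "You are a creative storyteller...",
	      "parameters": {"temperature": 0.9, "max_tokens": 500}
	    }
	  }
	}`)

	pack, err := parsePack(data)
	if err != nil {
		log.Fatal(err)
	}
	p := pack.Prompts["storyteller"]
	fmt.Printf("temperature=%.1f max_tokens=%d\n",
		p.Parameters.Temperature, p.Parameters.MaxTokens)
}
```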
Progress Tracking
Track generation progress while streaming:
var charCount int
for chunk := range conv.Stream(ctx, "Tell me about AI") {
    if chunk.Type == sdk.ChunkDone {
        fmt.Printf("\n[Complete - %d characters]\n", charCount)
        break
    }
    fmt.Print(chunk.Text)
    charCount += len(chunk.Text)
}
</fmt>
Key Concepts
- Channel-Based - `Stream()` returns a channel of chunks
- Non-Blocking - print responses as they arrive
- Error Handling - check `chunk.Error` for issues
- Completion - `ChunkDone` signals the end of the stream
Next Steps
- Hello Example - Basic conversation
- Tools Example - Function calling
- HITL Example - Human-in-the-loop approval