SDK API Reference
import "github.com/AltairaLabs/PromptKit/sdk"Package sdk provides a simple API for LLM conversations using PromptPack files.
SDK v2 is a pack-first SDK where everything starts from a .pack.json file. The pack contains prompts, variables, tools, validators, and model configuration. The SDK loads the pack and provides a minimal API to interact with LLMs.
Quick Start
The simplest usage takes just a few lines:

```go
conv, err := sdk.Open("./assistant.pack.json", "chat")
if err != nil {
    log.Fatal(err)
}
defer conv.Close()

resp, _ := conv.Send(ctx, "Hello!")
fmt.Println(resp.Text())
```

Core Concepts
Opening a Conversation:

Use Open to load a pack file and create a conversation for a specific prompt:

```go
// Minimal - provider auto-detected from environment
conv, _ := sdk.Open("./demo.pack.json", "troubleshooting")

// With options - override model, provider, etc.
conv, _ := sdk.Open("./demo.pack.json", "troubleshooting",
    sdk.WithModel("gpt-4o"),
    sdk.WithAPIKey(os.Getenv("MY_OPENAI_KEY")),
)
```

Variables:

Variables defined in the pack are populated at runtime:

```go
conv.SetVar("customer_id", "acme-corp")
conv.SetVars(map[string]any{
    "customer_name": "ACME Corporation",
    "tier":          "premium",
})
```

Tools:

Tools defined in the pack just need implementation handlers:

```go
conv.OnTool("list_devices", func(args map[string]any) (any, error) {
    return myAPI.ListDevices(args["customer_id"].(string))
})
```

Streaming:

Stream responses chunk by chunk:

```go
for chunk := range conv.Stream(ctx, "Tell me a story") {
    fmt.Print(chunk.Text)
}
```

Design Principles
- Pack is the Source of Truth - The .pack.json file defines prompts, tools, validators, and pipeline configuration. The SDK configures itself automatically.
- Convention Over Configuration - API keys from environment, provider auto-detection, model defaults from pack. Override only when needed.
- Progressive Disclosure - Simple things are simple, advanced features available but not required.
- Same Runtime, Same Behavior - SDK v2 uses the same runtime pipeline as Arena. Pack-defined behaviors work identically.
- Thin Wrapper - No type duplication. Core types like Message, ContentPart, CostInfo come directly from runtime/types.
Package Structure
The SDK is organized into sub-packages for specific functionality:
- sdk (this package): Entry point, Open, Conversation, Response
- sdk/tools: Typed tool handlers, HITL support
- sdk/stream: Streaming response handling
- sdk/message: Multimodal message building
- sdk/hooks: Event subscription and lifecycle hooks
- sdk/validation: Validator registration and error handling
Most users only need to import the root sdk package.
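For instance, an application that also wants the typed tool helpers might import the root package alongside a sub-package (a sketch; the sub-package path is inferred from the list above, and the `sdktools` alias matches the one used elsewhere in this reference):

```go
import (
    "github.com/AltairaLabs/PromptKit/sdk"
    sdktools "github.com/AltairaLabs/PromptKit/sdk/tools"
)
```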
Runtime Types
The SDK uses runtime types directly - no duplication:

```go
import "github.com/AltairaLabs/PromptKit/runtime/types"

msg := &types.Message{Role: "user"}
msg.AddTextPart("Hello")
```

Key runtime types: [types.Message], [types.ContentPart], [types.MediaContent], [types.CostInfo], [types.ValidationResult].
Schema Reference
All pack examples conform to the PromptPack Specification v1.1.0: https://github.com/AltairaLabs/promptpack-spec/blob/main/schema/promptpack.schema.json
- Variables
- type ChunkType
- type Conversation
- func Open(packPath, promptName string, opts ...Option) (*Conversation, error)
- func OpenDuplex(packPath, promptName string, opts ...Option) (*Conversation, error)
- func Resume(conversationID, packPath, promptName string, opts ...Option) (*Conversation, error)
- func (c *Conversation) CheckPending(name string, args map[string]any) (*sdktools.PendingToolCall, bool)
- func (c *Conversation) Clear() error
- func (c *Conversation) Close() error
- func (c *Conversation) Continue(ctx context.Context) (*Response, error)
- func (c *Conversation) Done() (<-chan struct{}, error)
- func (c *Conversation) EventBus() *events.EventBus
- func (c *Conversation) Fork() *Conversation
- func (c *Conversation) GetVar(name string) (string, bool)
- func (c *Conversation) ID() string
- func (c *Conversation) Messages(ctx context.Context) []types.Message
- func (c *Conversation) OnTool(name string, handler ToolHandler)
- func (c *Conversation) OnToolAsync(name string, checkFunc func(args map[string]any) sdktools.PendingResult, execFunc ToolHandler)
- func (c *Conversation) OnToolCtx(name string, handler ToolHandlerCtx)
- func (c *Conversation) OnToolExecutor(name string, executor tools.Executor)
- func (c *Conversation) OnToolHTTP(name string, config *sdktools.HTTPToolConfig)
- func (c *Conversation) OnTools(handlers map[string]ToolHandler)
- func (c *Conversation) PendingTools() []*sdktools.PendingToolCall
- func (c *Conversation) RejectTool(id, reason string) (*sdktools.ToolResolution, error)
- func (c *Conversation) ResolveTool(id string) (*sdktools.ToolResolution, error)
- func (c *Conversation) Response() (<-chan providers.StreamChunk, error)
- func (c *Conversation) Send(ctx context.Context, message any, opts ...SendOption) (*Response, error)
- func (c *Conversation) SendChunk(ctx context.Context, chunk *providers.StreamChunk) error
- func (c *Conversation) SendFrame(ctx context.Context, frame *session.ImageFrame) error
- func (c *Conversation) SendText(ctx context.Context, text string) error
- func (c *Conversation) SendVideoChunk(ctx context.Context, chunk *session.VideoChunk) error
- func (c *Conversation) SessionError() error
- func (c *Conversation) SetVar(name, value string)
- func (c *Conversation) SetVars(vars map[string]any)
- func (c *Conversation) SetVarsFromEnv(prefix string)
- func (c *Conversation) Stream(ctx context.Context, message any, opts ...SendOption) <-chan StreamChunk
- func (c *Conversation) StreamRaw(ctx context.Context, message any) (<-chan streamPkg.Chunk, error)
- func (c *Conversation) ToolRegistry() *tools.Registry
- func (c *Conversation) TriggerStart(ctx context.Context, message string) error
- type CredentialOption
- type MCPServerBuilder
- type Option
- func WithAPIKey(key string) Option
- func WithAutoResize(maxWidth, maxHeight int) Option
- func WithAzure(endpoint string, opts ...PlatformOption) Option
- func WithBedrock(region string, opts ...PlatformOption) Option
- func WithConversationID(id string) Option
- func WithDisabledValidators(names ...string) Option
- func WithEventBus(bus *events.EventBus) Option
- func WithEventStore(store events.EventStore) Option
- func WithImagePreprocessing(cfg *stage.ImagePreprocessConfig) Option
- func WithJSONMode() Option
- func WithMCP(name, command string, args ...string) Option
- func WithMCPServer(builder *MCPServerBuilder) Option
- func WithModel(model string) Option
- func WithProvider(p providers.Provider) Option
- func WithRelevanceTruncation(cfg *RelevanceConfig) Option
- func WithResponseFormat(format *providers.ResponseFormat) Option
- func WithSkipSchemaValidation() Option
- func WithStateStore(store statestore.Store) Option
- func WithStreamingConfig(streamingConfig *providers.StreamingInputConfig) Option
- func WithStreamingVideo(cfg *VideoStreamConfig) Option
- func WithStrictValidation() Option
- func WithTTS(service tts.Service) Option
- func WithTokenBudget(tokens int) Option
- func WithToolRegistry(registry *tools.Registry) Option
- func WithTruncation(strategy string) Option
- func WithTurnDetector(detector audio.TurnDetector) Option
- func WithVADMode(sttService stt.Service, ttsService tts.Service, cfg *VADModeConfig) Option
- func WithValidationMode(mode ValidationMode) Option
- func WithVariableProvider(p variables.Provider) Option
- func WithVariables(vars map[string]string) Option
- func WithVertex(region, project string, opts ...PlatformOption) Option
- type PackError
- type PendingTool
- type PlatformOption
- type ProviderError
- type RelevanceConfig
- type Response
- func (r *Response) Cost() float64
- func (r *Response) Duration() time.Duration
- func (r *Response) HasMedia() bool
- func (r *Response) HasToolCalls() bool
- func (r *Response) InputTokens() int
- func (r *Response) Message() *types.Message
- func (r *Response) OutputTokens() int
- func (r *Response) Parts() []types.ContentPart
- func (r *Response) PendingTools() []PendingTool
- func (r *Response) Text() string
- func (r *Response) TokensUsed() int
- func (r *Response) ToolCalls() []types.MessageToolCall
- func (r *Response) Validations() []types.ValidationResult
- type SendOption
- func WithAudioData(data []byte, mimeType string) SendOption
- func WithAudioFile(path string) SendOption
- func WithDocumentData(data []byte, mimeType string) SendOption
- func WithDocumentFile(path string) SendOption
- func WithFile(name string, data []byte) SendOption
- func WithImageData(data []byte, mimeType string, detail ...*string) SendOption
- func WithImageFile(path string, detail ...*string) SendOption
- func WithImageURL(url string, detail ...*string) SendOption
- func WithVideoData(data []byte, mimeType string) SendOption
- func WithVideoFile(path string) SendOption
- type SessionMode
- type StreamChunk
- type ToolError
- type ToolHandler
- type ToolHandlerCtx
- type VADModeConfig
- type ValidationError
- type ValidationMode
- type VideoStreamConfig
Variables
Sentinel errors for common failure cases.

```go
var (
    // ErrConversationClosed is returned when Send or Stream is called on a closed conversation.
    ErrConversationClosed = errors.New("conversation is closed")

    // ErrConversationNotFound is returned by Resume when the conversation ID doesn't exist.
    ErrConversationNotFound = errors.New("conversation not found")

    // ErrNoStateStore is returned by Resume when no state store is configured.
    ErrNoStateStore = errors.New("no state store configured")

    // ErrPromptNotFound is returned when the specified prompt doesn't exist in the pack.
    ErrPromptNotFound = errors.New("prompt not found in pack")

    // ErrPackNotFound is returned when the pack file doesn't exist.
    ErrPackNotFound = errors.New("pack file not found")

    // ErrProviderNotDetected is returned when no provider could be auto-detected.
    ErrProviderNotDetected = errors.New("could not detect provider: no API keys found in environment")

    // ErrToolNotRegistered is returned when the LLM calls a tool that has no handler.
    ErrToolNotRegistered = errors.New("tool handler not registered")

    // ErrToolNotInPack is returned when trying to register a handler for a tool not in the pack.
    ErrToolNotInPack = errors.New("tool not defined in pack")
)
```
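Callers can branch on these with errors.Is; for example (a sketch, assuming `errors` is imported and `apiKey` is supplied by the application):

```go
conv, err := sdk.Open("./assistant.pack.json", "chat")
if errors.Is(err, sdk.ErrProviderNotDetected) {
    // No API keys in the environment - fall back to an explicit key.
    conv, err = sdk.Open("./assistant.pack.json", "chat", sdk.WithAPIKey(apiKey))
}
```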
type ChunkType

ChunkType identifies the type of a streaming chunk.

```go
type ChunkType int

const (
    // ChunkText indicates the chunk contains text content.
    ChunkText ChunkType = iota

    // ChunkToolCall indicates the chunk contains a tool call.
    ChunkToolCall

    // ChunkMedia indicates the chunk contains media content.
    ChunkMedia

    // ChunkDone indicates streaming is complete.
    ChunkDone
)
```

func (ChunkType) String

```go
func (t ChunkType) String() string
```

String returns the string representation of the chunk type.
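When consuming Conversation.Stream, the chunk type selects which StreamChunk field is populated; a sketch:

```go
for chunk := range conv.Stream(ctx, "Summarize the report") {
    switch chunk.Type {
    case sdk.ChunkText:
        fmt.Print(chunk.Text)
    case sdk.ChunkToolCall:
        fmt.Println("\n[tool call requested]") // details in chunk.ToolCall
    case sdk.ChunkDone:
        // The final chunk carries the complete Response.
        fmt.Printf("\n(%d tokens)\n", chunk.Message.TokensUsed())
    }
}
```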
type Conversation
Conversation represents an active LLM conversation.
A conversation maintains:
- Connection to the LLM provider
- Message history (context)
- Variable state for template substitution
- Tool handlers for function calling
- Validation state
Conversations are created via Open or Resume and are safe for concurrent use. Each Open call creates an independent conversation with isolated state.
Basic usage:
conv, _ := sdk.Open("./assistant.pack.json", "chat")conv.SetVar("user_name", "Alice")
resp, _ := conv.Send(ctx, "Hello!")fmt.Println(resp.Text())
resp, _ = conv.Send(ctx, "What's my name?") // Remembers contextfmt.Println(resp.Text()) // "Your name is Alice"type Conversation struct { // contains filtered or unexported fields}func Open
Section titled “func Open”func Open(packPath, promptName string, opts ...Option) (*Conversation, error)Open loads a pack file and creates a new conversation for the specified prompt.
This is the primary entry point for SDK v2. It:
- Loads and parses the pack file
- Auto-detects the provider from environment (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.)
- Configures the runtime pipeline based on pack settings
- Creates an isolated conversation with its own state
Basic usage:
conv, err := sdk.Open("./assistant.pack.json", "chat")if err != nil { log.Fatal(err)}defer conv.Close()
resp, _ := conv.Send(ctx, "Hello!")fmt.Println(resp.Text())With options:
conv, err := sdk.Open("./assistant.pack.json", "chat", sdk.WithModel("gpt-4o"), sdk.WithAPIKey(os.Getenv("MY_KEY")), sdk.WithStateStore(redisStore),)The packPath can be:
- Absolute path: “/path/to/assistant.pack.json”
- Relative path: ”./packs/assistant.pack.json”
- URL: “https://example.com/packs/assistant.pack.json” (future)
The promptName must match a prompt ID defined in the pack’s “prompts” section.
func OpenDuplex
```go
func OpenDuplex(packPath, promptName string, opts ...Option) (*Conversation, error)
```

OpenDuplex loads a pack file and creates a new duplex streaming conversation for the specified prompt.
This creates a conversation in duplex mode for bidirectional streaming interactions. Use this when you need real-time streaming input/output with the LLM.
Basic usage:
```go
conv, err := sdk.OpenDuplex("./assistant.pack.json", "chat")
if err != nil {
    log.Fatal(err)
}
defer conv.Close()

// Send streaming input
go func() {
    conv.SendText(ctx, "Hello, ")
    conv.SendText(ctx, "how are you?")
}()

// Receive streaming output
respCh, _ := conv.Response()
for chunk := range respCh {
    fmt.Print(chunk.Content)
}
```

The provider must support streaming input (implement providers.StreamInputSupport). Currently supported providers: Gemini with certain models.
func Resume
```go
func Resume(conversationID, packPath, promptName string, opts ...Option) (*Conversation, error)
```

Resume loads an existing conversation from state storage.
Use this to continue a conversation that was previously persisted:
```go
store := statestore.NewRedisStore("redis://localhost:6379")
conv, err := sdk.Resume("session-123", "./chat.pack.json", "assistant",
    sdk.WithStateStore(store),
)
if errors.Is(err, sdk.ErrConversationNotFound) {
    // Start new conversation
    conv, _ = sdk.Open("./chat.pack.json", "assistant",
        sdk.WithStateStore(store),
        sdk.WithConversationID("session-123"),
    )
}
```

Resume requires a state store to be configured. If no state store is provided, it returns ErrNoStateStore.
func (*Conversation) CheckPending
```go
func (c *Conversation) CheckPending(name string, args map[string]any) (*sdktools.PendingToolCall, bool)
```

CheckPending checks whether a tool call should be pending and creates it if so. It returns the pending call and a "should wait" flag; when the flag is true, the tool shouldn't execute yet.

This method is used internally when processing tool calls from the LLM. It can also be useful for testing HITL workflows:

```go
pending, shouldWait := conv.CheckPending("risky_tool", args)
if shouldWait {
    // Tool requires approval
}
```

func (*Conversation) Clear

```go
func (c *Conversation) Clear() error
```

Clear removes all messages from the conversation history.
This keeps the system prompt and variables but removes all user/assistant messages. Useful for starting fresh within the same conversation session. In duplex mode, this will close the session first if actively streaming.
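For example (a sketch):

```go
conv.Send(ctx, "Let's discuss topic A")

// Drop the history but keep the system prompt and variables.
if err := conv.Clear(); err != nil {
    log.Fatal(err)
}

conv.Send(ctx, "A fresh start on topic B")
```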
func (*Conversation) Close
```go
func (c *Conversation) Close() error
```

Close releases resources associated with the conversation.
After Close is called, Send and Stream will return ErrConversationClosed. It’s safe to call Close multiple times.
func (*Conversation) Continue
```go
func (c *Conversation) Continue(ctx context.Context) (*Response, error)
```

Continue resumes conversation after resolving pending tools.
Call this after approving/rejecting all pending tools to continue the conversation with the tool results:
resp, _ := conv.Send(ctx, "Process refund")for _, pending := range resp.PendingTools() { conv.ResolveTool(pending.ID)}resp, _ = conv.Continue(ctx) // LLM receives tool resultsfunc (*Conversation) Done
Section titled “func (*Conversation) Done”func (c *Conversation) Done() (<-chan struct{}, error)Done returns a channel that’s closed when the duplex session ends. Only available when the conversation was opened with OpenDuplex().
func (*Conversation) EventBus
```go
func (c *Conversation) EventBus() *events.EventBus
```

EventBus returns the conversation's event bus for observability.
Use this to subscribe to runtime events like tool calls, validations, and provider requests:
```go
conv.EventBus().Subscribe(events.EventToolCallStarted, func(e *events.Event) {
    log.Printf("Tool call: %s", e.Data.(*events.ToolCallStartedData).ToolName)
})
```

For convenience methods, see the [hooks] package.
func (*Conversation) Fork
```go
func (c *Conversation) Fork() *Conversation
```

Fork creates a copy of the current conversation state.
Use this to explore alternative conversation branches:
conv.Send(ctx, "I want to plan a trip")conv.Send(ctx, "What cities should I visit?")
// Fork to explore different pathsbranch := conv.Fork()
conv.Send(ctx, "Tell me about Tokyo") // Original pathbranch.Send(ctx, "Tell me about Kyoto") // Branch pathThe forked conversation is completely independent - changes to one do not affect the other.
func (*Conversation) GetVar
```go
func (c *Conversation) GetVar(name string) (string, bool)
```

GetVar returns the current value of a template variable. Returns empty string and false if the variable is not set.
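For example (a sketch):

```go
conv.SetVar("tier", "premium")
if tier, ok := conv.GetVar("tier"); ok {
    fmt.Println("current tier:", tier) // "current tier: premium"
}
```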
func (*Conversation) ID
```go
func (c *Conversation) ID() string
```

ID returns the conversation's unique identifier.
func (*Conversation) Messages
```go
func (c *Conversation) Messages(ctx context.Context) []types.Message
```

Messages returns the conversation history.
The returned slice is a copy - modifying it does not affect the conversation.
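For example, inspecting the roles in the history (a sketch; only the Role field of types.Message is shown):

```go
for i, msg := range conv.Messages(ctx) {
    fmt.Printf("%d: %s\n", i, msg.Role)
}
```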
func (*Conversation) OnTool
```go
func (c *Conversation) OnTool(name string, handler ToolHandler)
```

OnTool registers a handler for a tool defined in the pack.
The tool name must match a tool defined in the pack’s tools section. When the LLM calls the tool, your handler receives the parsed arguments and returns a result.
conv.OnTool("get_weather", func(args map[string]any) (any, error) { city := args["city"].(string) return weatherAPI.GetCurrent(city)})The handler’s return value is automatically serialized to JSON and sent back to the LLM as the tool result.
func (*Conversation) OnToolAsync
```go
func (c *Conversation) OnToolAsync(name string, checkFunc func(args map[string]any) sdktools.PendingResult, execFunc ToolHandler)
```

OnToolAsync registers a handler that may require approval before execution.
Use this for Human-in-the-Loop (HITL) workflows where certain actions require human approval before proceeding:
conv.OnToolAsync("process_refund", func(args map[string]any) sdk.PendingResult { amount := args["amount"].(float64) if amount > 1000 { return sdk.PendingResult{ Reason: "high_value_refund", Message: fmt.Sprintf("Refund of $%.2f requires approval", amount), } } return sdk.PendingResult{} // Proceed immediately}, func(args map[string]any) (any, error) { // Execute the actual refund return refundAPI.Process(args)})The first function checks if approval is needed, the second executes the action.
func (*Conversation) OnToolCtx
```go
func (c *Conversation) OnToolCtx(name string, handler ToolHandlerCtx)
```

OnToolCtx registers a context-aware handler for a tool.
Use this when your tool implementation needs the request context for cancellation, deadlines, or tracing:
conv.OnToolCtx("search_db", func(ctx context.Context, args map[string]any) (any, error) { return db.SearchWithContext(ctx, args["query"].(string))})func (*Conversation) OnToolExecutor
Section titled “func (*Conversation) OnToolExecutor”func (c *Conversation) OnToolExecutor(name string, executor tools.Executor)OnToolExecutor registers a custom executor for tools.
Use this when you need full control over tool execution or want to use a runtime executor directly:
executor := &MyCustomExecutor{}conv.OnToolExecutor("custom_tool", executor)The executor must implement the runtime/tools.Executor interface.
func (*Conversation) OnToolHTTP
```go
func (c *Conversation) OnToolHTTP(name string, config *sdktools.HTTPToolConfig)
```

OnToolHTTP registers a tool that makes HTTP requests.
This is a convenience method for tools that call external APIs:
conv.OnToolHTTP("create_ticket", sdktools.NewHTTPToolConfig( "https://api.tickets.example.com/tickets", sdktools.WithMethod("POST"), sdktools.WithHeader("Authorization", "Bearer "+apiKey), sdktools.WithTimeout(5000),))The tool arguments from the LLM are serialized to JSON and sent as the request body. The response is parsed and returned to the LLM.
func (*Conversation) OnTools
```go
func (c *Conversation) OnTools(handlers map[string]ToolHandler)
```

OnTools registers multiple tool handlers at once.

```go
conv.OnTools(map[string]sdk.ToolHandler{
    "get_weather": getWeatherHandler,
    "search_docs": searchDocsHandler,
    "send_email":  sendEmailHandler,
})
```

func (*Conversation) PendingTools

```go
func (c *Conversation) PendingTools() []*sdktools.PendingToolCall
```

PendingTools returns all pending tool calls awaiting approval.
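A review loop might look like the following sketch (assuming each sdktools.PendingToolCall carries the same ID used by ResolveTool and RejectTool, and that `approved` is an application-defined check):

```go
for _, pending := range conv.PendingTools() {
    if approved(pending) {
        conv.ResolveTool(pending.ID)
    } else {
        conv.RejectTool(pending.ID, "rejected by reviewer")
    }
}
resp, _ := conv.Continue(ctx) // Deliver the resolutions to the LLM
```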
func (*Conversation) RejectTool
```go
func (c *Conversation) RejectTool(id, reason string) (*sdktools.ToolResolution, error)
```

RejectTool rejects a pending tool call.

Use this when the human reviewer decides not to approve the tool:

```go
resp, _ := conv.RejectTool(pending.ID, "Not authorized for this amount")
```

func (*Conversation) ResolveTool

```go
func (c *Conversation) ResolveTool(id string) (*sdktools.ToolResolution, error)
```

ResolveTool approves and executes a pending tool call.
After calling Send() and receiving pending tools in the response, use this to approve and execute them:
resp, _ := conv.Send(ctx, "Process refund for order #12345")if len(resp.PendingTools()) > 0 { pending := resp.PendingTools()[0] // ... get approval ... result, _ := conv.ResolveTool(pending.ID) // Continue the conversation with the result resp, _ = conv.Continue(ctx)}func (*Conversation) Response
Section titled “func (*Conversation) Response”func (c *Conversation) Response() (<-chan providers.StreamChunk, error)Response returns the response channel for duplex streaming. Only available when the conversation was opened with OpenDuplex().
func (*Conversation) Send
```go
func (c *Conversation) Send(ctx context.Context, message any, opts ...SendOption) (*Response, error)
```

Send sends a message to the LLM and returns the response.
The message can be a simple string or a *types.Message for multimodal content. Variables are substituted into the system prompt template before sending.
Basic usage:
resp, err := conv.Send(ctx, "Hello!")if err != nil { log.Fatal(err)}fmt.Println(resp.Text())With message options:
resp, err := conv.Send(ctx, "What's in this image?", sdk.WithImageFile("/path/to/image.jpg"),)Send automatically:
- Substitutes variables into the system prompt
- Runs any registered validators
- Handles tool calls if tools are defined
- Persists state if a state store is configured
func (*Conversation) SendChunk
```go
func (c *Conversation) SendChunk(ctx context.Context, chunk *providers.StreamChunk) error
```

SendChunk sends a streaming chunk in duplex mode. Only available when the conversation was opened with OpenDuplex().
func (*Conversation) SendFrame
```go
func (c *Conversation) SendFrame(ctx context.Context, frame *session.ImageFrame) error
```

SendFrame sends an image frame in duplex mode for realtime video scenarios. Only available when the conversation was opened with OpenDuplex().
Example:
frame := &session.ImageFrame{ Data: jpegBytes, MIMEType: "image/jpeg", Timestamp: time.Now(),}conv.SendFrame(ctx, frame)func (*Conversation) SendText
Section titled “func (*Conversation) SendText”func (c *Conversation) SendText(ctx context.Context, text string) errorSendText sends text in duplex mode. Only available when the conversation was opened with OpenDuplex().
func (*Conversation) SendVideoChunk
```go
func (c *Conversation) SendVideoChunk(ctx context.Context, chunk *session.VideoChunk) error
```

SendVideoChunk sends a video chunk in duplex mode for encoded video streaming. Only available when the conversation was opened with OpenDuplex().
Example:
chunk := &session.VideoChunk{ Data: h264Data, MIMEType: "video/h264", IsKeyFrame: true, Timestamp: time.Now(),}conv.SendVideoChunk(ctx, chunk)func (*Conversation) SessionError
Section titled “func (*Conversation) SessionError”func (c *Conversation) SessionError() errorSessionError returns any error from the duplex session. Only available when the conversation was opened with OpenDuplex(). Note: This is named SessionError to avoid conflict with the Error interface method.
func (*Conversation) SetVar
```go
func (c *Conversation) SetVar(name, value string)
```

SetVar sets a single template variable.
Variables are substituted into the system prompt template:
conv.SetVar("customer_name", "Alice")// Template: "You are helping {{customer_name}}"// Becomes: "You are helping Alice"func (*Conversation) SetVars
Section titled “func (*Conversation) SetVars”func (c *Conversation) SetVars(vars map[string]any)SetVars sets multiple template variables at once.
conv.SetVars(map[string]any{ "customer_name": "Alice", "customer_tier": "premium", "max_discount": 20,})func (*Conversation) SetVarsFromEnv
Section titled “func (*Conversation) SetVarsFromEnv”func (c *Conversation) SetVarsFromEnv(prefix string)SetVarsFromEnv sets variables from environment variables with a given prefix.
Environment variables matching the prefix are added as template variables with the prefix stripped and converted to lowercase:
```go
// If PROMPTKIT_CUSTOMER_NAME=Alice is set:
conv.SetVarsFromEnv("PROMPTKIT_")
// Sets variable "customer_name" = "Alice"
```

func (*Conversation) Stream

```go
func (c *Conversation) Stream(ctx context.Context, message any, opts ...SendOption) <-chan StreamChunk
```

Stream sends a message and returns a channel of response chunks.
Use this for real-time streaming of LLM responses:
```go
for chunk := range conv.Stream(ctx, "Tell me a story") {
    if chunk.Error != nil {
        log.Printf("Error: %v", chunk.Error)
        break
    }
    fmt.Print(chunk.Text)
}
```

The channel is closed when the response is complete or an error occurs. The final chunk (Type == ChunkDone) contains the complete Response.
func (*Conversation) StreamRaw
```go
func (c *Conversation) StreamRaw(ctx context.Context, message any) (<-chan streamPkg.Chunk, error)
```

StreamRaw returns a channel of streaming chunks for use with the stream package. This is a lower-level API that returns stream.Chunk types.
Most users should use Conversation.Stream instead. StreamRaw is useful when working with [stream.Process] or [stream.CollectText].
```go
err := stream.Process(ctx, conv, "Hello", func(chunk stream.Chunk) error {
    fmt.Print(chunk.Text)
    return nil
})
```

func (*Conversation) ToolRegistry

```go
func (c *Conversation) ToolRegistry() *tools.Registry
```

ToolRegistry returns the underlying tool registry.
This is a power-user method for direct registry access. Tool descriptors are loaded from the pack; this allows inspecting them or registering custom executors.
```go
registry := conv.ToolRegistry()
for _, desc := range registry.Descriptors() {
    fmt.Printf("Tool: %s\n", desc.Name)
}
```

func (*Conversation) TriggerStart

```go
func (c *Conversation) TriggerStart(ctx context.Context, message string) error
```

TriggerStart sends a text message to make the model initiate the conversation. Use this in ASM mode when you want the model to speak first (e.g., introducing itself). Only available when the conversation was opened with OpenDuplex().
Example:
conv, _ := sdk.OpenDuplex("./assistant.pack.json", "interviewer", ...)// Start processing responses firstgo processResponses(conv.Response())// Trigger the model to beginconv.TriggerStart(ctx, "Please introduce yourself and begin the interview.")type CredentialOption
Section titled “type CredentialOption”CredentialOption configures credentials for a provider.
type CredentialOption interface { // contains filtered or unexported methods}func WithCredentialAPIKey
Section titled “func WithCredentialAPIKey”func WithCredentialAPIKey(key string) CredentialOptionWithCredentialAPIKey sets an explicit API key.
func WithCredentialEnv
```go
func WithCredentialEnv(envVar string) CredentialOption
```

WithCredentialEnv sets an environment variable name for the credential.
func WithCredentialFile
```go
func WithCredentialFile(path string) CredentialOption
```

WithCredentialFile sets a credential file path.
type MCPServerBuilder
MCPServerBuilder provides a fluent interface for configuring MCP servers.

```go
type MCPServerBuilder struct {
    // contains filtered or unexported fields
}
```

func NewMCPServer

```go
func NewMCPServer(name, command string, args ...string) *MCPServerBuilder
```

NewMCPServer creates a new MCP server configuration builder.
```go
server := sdk.NewMCPServer("github", "npx", "@modelcontextprotocol/server-github").
    WithEnv("GITHUB_TOKEN", os.Getenv("GITHUB_TOKEN"))

conv, _ := sdk.Open("./assistant.pack.json", "assistant",
    sdk.WithMCPServer(server),
)
```

func (*MCPServerBuilder) Build

```go
func (b *MCPServerBuilder) Build() mcp.ServerConfig
```

Build returns the configured server config.
func (*MCPServerBuilder) WithArgs
```go
func (b *MCPServerBuilder) WithArgs(args ...string) *MCPServerBuilder
```

WithArgs appends additional arguments to the MCP server command.
func (*MCPServerBuilder) WithEnv
```go
func (b *MCPServerBuilder) WithEnv(key, value string) *MCPServerBuilder
```

WithEnv adds an environment variable to the MCP server.
type Option
Option configures a Conversation.

```go
type Option func(*config) error
```

func WithAPIKey

```go
func WithAPIKey(key string) Option
```

WithAPIKey provides an explicit API key instead of reading from environment.
conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithAPIKey(os.Getenv("MY_CUSTOM_KEY")),)func WithAutoResize
Section titled “func WithAutoResize”func WithAutoResize(maxWidth, maxHeight int) OptionWithAutoResize is a convenience option that enables image resizing with the specified dimensions. Use this for simple cases; use WithImagePreprocessing for full control.
Example:
conv, _ := sdk.Open("./chat.pack.json", "vision-assistant", sdk.WithAutoResize(1024, 1024), // Max 1024x1024)func WithAzure
Section titled “func WithAzure”func WithAzure(endpoint string, opts ...PlatformOption) OptionWithAzure configures Azure AI services as the hosting platform. This uses the Azure SDK default credential chain (Managed Identity, Azure CLI, etc.).
conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithAzure("https://my-resource.openai.azure.com"),)func WithBedrock
Section titled “func WithBedrock”func WithBedrock(region string, opts ...PlatformOption) OptionWithBedrock configures AWS Bedrock as the hosting platform. This uses the AWS SDK default credential chain (IRSA, instance profile, env vars).
conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithBedrock("us-west-2"),)func WithConversationID
Section titled “func WithConversationID”func WithConversationID(id string) OptionWithConversationID sets the conversation identifier.
If not set, a unique ID is auto-generated. Set this when you want to use a specific ID for state persistence or tracking.
conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithStateStore(store), sdk.WithConversationID("user-123-session-456"),)func WithDisabledValidators
Section titled “func WithDisabledValidators”func WithDisabledValidators(names ...string) OptionWithDisabledValidators disables specific validators by name.
conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithDisabledValidators("max_length", "banned_words"),)func WithEventBus
Section titled “func WithEventBus”func WithEventBus(bus *events.EventBus) OptionWithEventBus provides a shared event bus for observability.
When set, the conversation emits events to this bus. Use this to share an event bus across multiple conversations for centralized logging, metrics, or debugging.
```go
bus := events.NewEventBus()
bus.SubscribeAll(myMetricsCollector)

conv1, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithEventBus(bus))
conv2, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithEventBus(bus))
```

func WithEventStore

```go
func WithEventStore(store events.EventStore) Option
```

WithEventStore configures event persistence for session recording.
When set, all events published through the conversation’s event bus are automatically persisted to the store. This enables session replay and analysis.
The event store is automatically attached to the event bus. If no event bus is provided via WithEventBus, a new one is created internally.
Example with file-based storage:
```go
store, _ := events.NewFileEventStore("/var/log/sessions")
defer store.Close()

conv, _ := sdk.Open("./chat.pack.json", "assistant",
    sdk.WithEventStore(store),
)
```

Example with shared bus and store:

```go
store, _ := events.NewFileEventStore("/var/log/sessions")
bus := events.NewEventBus().WithStore(store)

conv, _ := sdk.Open("./chat.pack.json", "assistant",
    sdk.WithEventBus(bus),
)
```

func WithImagePreprocessing

```go
func WithImagePreprocessing(cfg *stage.ImagePreprocessConfig) Option
```

WithImagePreprocessing enables automatic image preprocessing before sending to the LLM. This resizes large images to fit within provider limits, reducing token usage and preventing errors.
The default configuration resizes images to max 1024x1024 with 85% quality.
Example with defaults:
conv, _ := sdk.Open("./chat.pack.json", "vision-assistant", sdk.WithImagePreprocessing(nil), // Use default settings)Example with custom config:
conv, _ := sdk.Open("./chat.pack.json", "vision-assistant", sdk.WithImagePreprocessing(&stage.ImagePreprocessConfig{ Resize: stage.ImageResizeStageConfig{ MaxWidth: 2048, MaxHeight: 2048, Quality: 90, }, EnableResize: true, }),)func WithJSONMode
Section titled “func WithJSONMode”func WithJSONMode() OptionWithJSONMode is a convenience option that enables simple JSON output mode. The model will return valid JSON objects but without schema enforcement. Use WithResponseFormat for more control including schema validation.
Example:
conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithJSONMode(),)resp, _ := conv.Send(ctx, "List 3 colors as JSON")// Response: {"colors": ["red", "green", "blue"]}func WithMCP
Section titled “func WithMCP”func WithMCP(name, command string, args ...string) OptionWithMCP adds an MCP (Model Context Protocol) server for tool execution.
MCP servers provide external tools that can be called by the LLM. The server is started automatically when the conversation opens and stopped when the conversation is closed.
Basic usage:
conv, _ := sdk.Open("./assistant.pack.json", "assistant", sdk.WithMCP("filesystem", "npx", "@modelcontextprotocol/server-filesystem", "/path"),)With environment variables:
conv, _ := sdk.Open("./assistant.pack.json", "assistant", sdk.WithMCP("github", "npx", "@modelcontextprotocol/server-github"). WithEnv("GITHUB_TOKEN", os.Getenv("GITHUB_TOKEN")),)Multiple servers:
conv, _ := sdk.Open("./assistant.pack.json", "assistant", sdk.WithMCP("filesystem", "npx", "@modelcontextprotocol/server-filesystem", "/path"), sdk.WithMCP("memory", "npx", "@modelcontextprotocol/server-memory"),)func WithMCPServer
Section titled “func WithMCPServer”func WithMCPServer(builder *MCPServerBuilder) OptionWithMCPServer adds a pre-configured MCP server.
```go
server := sdk.NewMCPServer("github", "npx", "@modelcontextprotocol/server-github").
    WithEnv("GITHUB_TOKEN", os.Getenv("GITHUB_TOKEN"))

conv, _ := sdk.Open("./assistant.pack.json", "assistant",
    sdk.WithMCPServer(server),
)
```

func WithModel

```go
func WithModel(model string) Option
```

WithModel overrides the default model specified in the pack.
conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithModel("gpt-4o"),)func WithProvider
Section titled “func WithProvider”func WithProvider(p providers.Provider) OptionWithProvider uses a custom provider instance.
This bypasses auto-detection and uses the provided provider directly. Use this for custom provider implementations or when you need full control over provider configuration.
```go
provider := openai.NewProvider(openai.Config{...})
conv, _ := sdk.Open("./chat.pack.json", "assistant",
    sdk.WithProvider(provider),
)
```

func WithRelevanceTruncation

```go
func WithRelevanceTruncation(cfg *RelevanceConfig) Option
```

WithRelevanceTruncation configures embedding-based relevance truncation.
This automatically sets the truncation strategy to “relevance” and configures the embedding provider for semantic similarity scoring.
Example with OpenAI embeddings:
```go
embProvider, _ := openai.NewEmbeddingProvider()
conv, _ := sdk.Open("./chat.pack.json", "assistant",
    sdk.WithTokenBudget(8000),
    sdk.WithRelevanceTruncation(&sdk.RelevanceConfig{
        EmbeddingProvider:   embProvider,
        MinRecentMessages:   3,
        SimilarityThreshold: 0.3,
    }),
)
```

Example with Gemini embeddings:

```go
embProvider, _ := gemini.NewEmbeddingProvider()
conv, _ := sdk.Open("./chat.pack.json", "assistant",
    sdk.WithTokenBudget(8000),
    sdk.WithRelevanceTruncation(&sdk.RelevanceConfig{
        EmbeddingProvider: embProvider,
    }),
)
```

func WithResponseFormat

```go
func WithResponseFormat(format *providers.ResponseFormat) Option
```

WithResponseFormat configures the LLM response format for JSON mode output. This instructs the model to return responses in the specified format.
For simple JSON object output:
conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithResponseFormat(&providers.ResponseFormat{ Type: providers.ResponseFormatJSON, }),)For structured JSON output with a schema:
schema := json.RawMessage(`{"type":"object","properties":{"name":{"type":"string"}}}`)conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithResponseFormat(&providers.ResponseFormat{ Type: providers.ResponseFormatJSONSchema, JSONSchema: schema, SchemaName: "person", Strict: true, }),)func WithSkipSchemaValidation
Section titled “func WithSkipSchemaValidation”func WithSkipSchemaValidation() OptionWithSkipSchemaValidation disables JSON schema validation during pack loading.
By default, packs are validated against the PromptPack JSON schema to ensure they are well-formed. Use this option to skip validation, for example when loading legacy packs or during development.
conv, _ := sdk.Open("./legacy.pack.json", "assistant", sdk.WithSkipSchemaValidation(),)func WithStateStore
Section titled “func WithStateStore”func WithStateStore(store statestore.Store) OptionWithStateStore configures persistent state storage.
When configured, conversation state (messages, metadata) is automatically persisted after each turn and can be resumed later via Resume.
```go
store := statestore.NewRedisStore("redis://localhost:6379")
conv, _ := sdk.Open("./chat.pack.json", "assistant",
    sdk.WithStateStore(store),
)
```

func WithStreamingConfig

```go
func WithStreamingConfig(streamingConfig *providers.StreamingInputConfig) Option
```

WithStreamingConfig configures streaming for duplex mode. When set, enables ASM (Audio Streaming Model) mode with continuous bidirectional streaming. When nil (default), uses VAD (Voice Activity Detection) mode with turn-based streaming.
ASM mode is for models with native bidirectional audio support (e.g., gemini-2.0-flash-exp). VAD mode is for standard text-based models with audio transcription.
Example for ASM mode:
conv, _ := sdk.OpenDuplex("./assistant.pack.json", "voice-chat", sdk.WithStreamingConfig(&providers.StreamingInputConfig{ Type: types.ContentTypeAudio, SampleRate: 16000, Channels: 1, }),)func WithStreamingVideo
Section titled “func WithStreamingVideo”func WithStreamingVideo(cfg *VideoStreamConfig) OptionWithStreamingVideo enables realtime video/image streaming for duplex sessions. This is used for webcam feeds, screen sharing, and continuous frame analysis.
The FrameRateLimitStage is added to the pipeline when TargetFPS > 0, dropping frames to maintain the target frame rate for LLM processing.
Example with defaults (1 FPS):
```go
conv, _ := sdk.OpenDuplex("./assistant.pack.json", "vision-chat",
    sdk.WithStreamingVideo(nil), // Use default settings
)
```

Example with custom config:

```go
conv, _ := sdk.OpenDuplex("./assistant.pack.json", "vision-chat",
    sdk.WithStreamingVideo(&sdk.VideoStreamConfig{
        TargetFPS: 2.0,  // 2 frames per second
        MaxWidth:  1280, // Resize large frames
        MaxHeight: 720,
        Quality:   80,
    }),
)
```

Sending frames (the conversation is named conv here so it does not shadow the session package):

```go
for frame := range webcam.Frames() {
    conv.SendFrame(ctx, &session.ImageFrame{
        Data:      frame.JPEG(),
        MIMEType:  "image/jpeg",
        Timestamp: time.Now(),
    })
}
```

func WithStrictValidation

```go
func WithStrictValidation() Option
```

WithStrictValidation makes all validators fail on violation.
Normally, validators respect their fail_on_violation setting from the pack. With strict validation, all validators will cause errors on failure.
conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithStrictValidation(),)func WithTTS
Section titled “func WithTTS”func WithTTS(service tts.Service) OptionWithTTS configures text-to-speech for the Pipeline.
TTS is applied via Pipeline middleware during streaming responses.
conv, _ := sdk.Open("./assistant.pack.json", "voice", sdk.WithTTS(tts.NewOpenAI(os.Getenv("OPENAI_API_KEY"))),)func WithTokenBudget
Section titled “func WithTokenBudget”func WithTokenBudget(tokens int) OptionWithTokenBudget sets the maximum tokens for context (prompt + history).
When the conversation history exceeds this budget, older messages are truncated according to the truncation strategy.
conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithTokenBudget(8000),)func WithToolRegistry
Section titled “func WithToolRegistry”func WithToolRegistry(registry *tools.Registry) OptionWithToolRegistry provides a pre-configured tool registry.
This is a power-user option for scenarios requiring direct registry access. Tool descriptors are still loaded from the pack; this allows providing custom executors or middleware.
```go
registry := tools.NewRegistry()
registry.RegisterExecutor(&myCustomExecutor{})
conv, _ := sdk.Open("./chat.pack.json", "assistant",
    sdk.WithToolRegistry(registry),
)
```

func WithTruncation

```go
func WithTruncation(strategy string) Option
```

WithTruncation sets the truncation strategy for context management.
Strategies:

- "sliding": Remove oldest messages first (default)
- "summarize": Summarize old messages before removing
- "relevance": Remove least relevant messages based on embedding similarity

```go
conv, _ := sdk.Open("./chat.pack.json", "assistant",
    sdk.WithTokenBudget(8000),
    sdk.WithTruncation("summarize"),
)
```
func WithTurnDetector
```go
func WithTurnDetector(detector audio.TurnDetector) Option
```

WithTurnDetector configures turn detection for the Pipeline.
Turn detectors determine when a user has finished speaking in audio sessions.
conv, _ := sdk.Open("./assistant.pack.json", "voice", sdk.WithTurnDetector(audio.NewSilenceDetector(500 * time.Millisecond)),)func WithVADMode
Section titled “func WithVADMode”func WithVADMode(sttService stt.Service, ttsService tts.Service, cfg *VADModeConfig) OptionWithVADMode configures VAD mode for voice conversations with standard text-based LLMs. VAD mode processes audio through a pipeline: Audio → VAD → STT → LLM → TTS → Audio
This is an alternative to ASM mode (WithStreamingConfig) for providers without native audio streaming support.
Example:
```go
sttService := stt.NewOpenAI(os.Getenv("OPENAI_API_KEY"))
ttsService := tts.NewOpenAI(os.Getenv("OPENAI_API_KEY"))

conv, _ := sdk.OpenDuplex("./assistant.pack.json", "voice-chat",
    sdk.WithProvider(openai.NewProvider(openai.Config{...})),
    sdk.WithVADMode(sttService, ttsService, nil), // nil uses defaults
)
```

With custom config:

```go
conv, _ := sdk.OpenDuplex("./assistant.pack.json", "voice-chat",
    sdk.WithProvider(openai.NewProvider(openai.Config{...})),
    sdk.WithVADMode(sttService, ttsService, &sdk.VADModeConfig{
        SilenceDuration: 500 * time.Millisecond,
        Voice:           "nova",
    }),
)
```

func WithValidationMode

```go
func WithValidationMode(mode ValidationMode) Option
```

WithValidationMode sets how validation failures are handled.
```go
// Suppress validation errors (useful for testing)
conv, _ := sdk.Open("./chat.pack.json", "assistant",
    sdk.WithValidationMode(sdk.ValidationModeWarn),
)
```

func WithVariableProvider

```go
func WithVariableProvider(p variables.Provider) Option
```

WithVariableProvider adds a variable provider for dynamic variable resolution.
Variables are resolved before each Send() and merged with static variables. Later providers in the chain override earlier ones with the same key.
conv, _ := sdk.Open("./assistant.pack.json", "support", sdk.WithVariableProvider(variables.Time()), sdk.WithVariableProvider(variables.State()),)func WithVariables
Section titled “func WithVariables”func WithVariables(vars map[string]string) OptionWithVariables sets initial variables for template substitution.
These variables are available immediately when the conversation opens, before any messages are sent. Use this for variables that must be set before the first LLM call (e.g., in streaming/ASM mode).
Variables set here override prompt defaults but can be further modified via conv.SetVar() for subsequent messages.
conv, _ := sdk.Open("./assistant.pack.json", "assistant", sdk.WithVariables(map[string]string{ "user_name": "Alice", "language": "en", }),)func WithVertex
Section titled “func WithVertex”func WithVertex(region, project string, opts ...PlatformOption) OptionWithVertex configures Google Cloud Vertex AI as the hosting platform. This uses Application Default Credentials (Workload Identity, gcloud auth, etc.).
conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithVertex("us-central1", "my-project"),)type PackError
Section titled “type PackError”PackError represents an error loading or parsing a pack file.
```go
type PackError struct {
    // Path is the pack file path.
    Path string

    // Cause is the underlying error.
    Cause error
}
```

func (*PackError) Error

```go
func (e *PackError) Error() string
```

Error implements the error interface.
func (*PackError) Unwrap
```go
func (e *PackError) Unwrap() error
```

Unwrap returns the underlying error.
type PendingTool
PendingTool represents a tool call that requires external approval.

```go
type PendingTool struct {
    // Unique identifier for this pending call
    ID string

    // Tool name
    Name string

    // Arguments passed to the tool
    Arguments map[string]any

    // Reason the tool requires approval
    Reason string

    // Human-readable message about why approval is needed
    Message string
}
```

type PlatformOption

PlatformOption configures a platform for a provider.

```go
type PlatformOption interface {
    // contains filtered or unexported methods
}
```

func WithPlatformEndpoint

```go
func WithPlatformEndpoint(endpoint string) PlatformOption
```

WithPlatformEndpoint sets a custom endpoint URL.
func WithPlatformProject
```go
func WithPlatformProject(project string) PlatformOption
```

WithPlatformProject sets the cloud project (for Vertex).
func WithPlatformRegion
```go
func WithPlatformRegion(region string) PlatformOption
```

WithPlatformRegion sets the cloud region.
type ProviderError
ProviderError represents an error from the LLM provider.

```go
type ProviderError struct {
    // Provider name (e.g., "openai", "anthropic").
    Provider string

    // StatusCode is the HTTP status code if available.
    StatusCode int

    // Message is the error message from the provider.
    Message string

    // Cause is the underlying error.
    Cause error
}
```

func (*ProviderError) Error

```go
func (e *ProviderError) Error() string
```

Error implements the error interface.
func (*ProviderError) Unwrap
```go
func (e *ProviderError) Unwrap() error
```

Unwrap returns the underlying error.
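Provider failures can be inspected with errors.As; for example (a sketch, assuming `errors` is imported):

```go
_, err := conv.Send(ctx, "Hello!")
var provErr *sdk.ProviderError
if errors.As(err, &provErr) && provErr.StatusCode == 429 {
    // Rate limited by the provider - back off and retry.
}
```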
type RelevanceConfig
RelevanceConfig configures embedding-based relevance truncation. Used when truncation strategy is "relevance".

```go
type RelevanceConfig struct {
    // EmbeddingProvider generates embeddings for similarity scoring.
    // Required for relevance-based truncation.
    EmbeddingProvider providers.EmbeddingProvider

    // MinRecentMessages always keeps the N most recent messages regardless of relevance.
    // Default: 3
    MinRecentMessages int

    // AlwaysKeepSystemRole keeps all system role messages regardless of score.
    // Default: true
    AlwaysKeepSystemRole bool

    // SimilarityThreshold is the minimum score (0.0-1.0) to consider a message relevant.
    // Messages below this threshold are dropped first. Default: 0.0 (no threshold)
    SimilarityThreshold float64

    // QuerySource determines what text to compare messages against.
    // Values: "last_user" (default), "last_n", "custom"
    QuerySource string

    // LastNCount is the number of messages to use when QuerySource is "last_n".
    // Default: 3
    LastNCount int

    // CustomQuery is the query text when QuerySource is "custom".
    CustomQuery string
}
```

type Response

Response represents the result of a conversation turn.
Response wraps the assistant’s message with convenience methods and additional metadata like timing and validation results.
Basic usage:
resp, _ := conv.Send(ctx, "Hello!")fmt.Println(resp.Text()) // Text contentfmt.Println(resp.TokensUsed()) // Total tokensfmt.Println(resp.Cost()) // Total cost in USDFor multimodal responses:
if resp.HasMedia() { for _, part := range resp.Parts() { if part.Media != nil { fmt.Printf("Media: %s\n", part.Media.URL) } }}type Response struct { // contains filtered or unexported fields}func (*Response) Cost
Section titled “func (*Response) Cost”func (r *Response) Cost() float64Cost returns the total cost in USD for this response.
func (*Response) Duration
```go
func (r *Response) Duration() time.Duration
```

Duration returns how long the request took.
func (*Response) HasMedia
```go
func (r *Response) HasMedia() bool
```

HasMedia returns true if the response contains any media content.
func (*Response) HasToolCalls
```go
func (r *Response) HasToolCalls() bool
```

HasToolCalls returns true if the response contains tool calls.
func (*Response) InputTokens
```go
func (r *Response) InputTokens() int
```

InputTokens returns the number of input (prompt) tokens used.
func (*Response) Message
```go
func (r *Response) Message() *types.Message
```

Message returns the underlying runtime Message.
Use this when you need direct access to the message structure, such as for serialization or passing to other runtime components.
func (*Response) OutputTokens
```go
func (r *Response) OutputTokens() int
```

OutputTokens returns the number of output (completion) tokens used.
func (*Response) Parts
```go
func (r *Response) Parts() []types.ContentPart
```

Parts returns all content parts in the response.
Use this for multimodal responses that may contain text, images, audio, or other content types.
func (*Response) PendingTools
```go
func (r *Response) PendingTools() []PendingTool
```

PendingTools returns tools that are awaiting external approval.
This is used for Human-in-the-Loop (HITL) workflows where certain tools require approval before execution.
func (*Response) Text
```go
func (r *Response) Text() string
```

Text returns the text content of the response.
This is a convenience method that extracts all text parts and joins them. For responses with only text content, this returns the full response. For multimodal responses, use Response.Parts to access all content.
func (*Response) TokensUsed
```go
func (r *Response) TokensUsed() int
```

TokensUsed returns the total number of tokens used (input + output).
func (*Response) ToolCalls
```go
func (r *Response) ToolCalls() []types.MessageToolCall
```

ToolCalls returns the tool calls made during this turn.
Tool calls are requests from the LLM to execute functions. If you have registered handlers via Conversation.OnTool, they will be executed automatically and the results sent back to the LLM.
func (*Response) Validations
```go
func (r *Response) Validations() []types.ValidationResult
```

Validations returns the results of all validators that ran.
Validators are defined in the pack and run automatically on responses. Check this to see which validators passed or failed.
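For example (a sketch; the results are printed generically since the types.ValidationResult fields are not listed in this reference):

```go
resp, _ := conv.Send(ctx, "Hello!")
for _, v := range resp.Validations() {
    fmt.Printf("%+v\n", v)
}
```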
type SendOption
SendOption configures a single Send call.

```go
type SendOption func(*sendConfig) error
```

func WithAudioData

```go
func WithAudioData(data []byte, mimeType string) SendOption
```

WithAudioData attaches audio from raw bytes.
resp, _ := conv.Send(ctx, "Transcribe this audio", sdk.WithAudioData(audioBytes, "audio/mp3"),)func WithAudioFile
Section titled “func WithAudioFile”func WithAudioFile(path string) SendOptionWithAudioFile attaches audio from a file path.
resp, _ := conv.Send(ctx, "Transcribe this audio", sdk.WithAudioFile("/path/to/audio.mp3"),)func WithDocumentData
Section titled “func WithDocumentData”func WithDocumentData(data []byte, mimeType string) SendOptionWithDocumentData attaches a document from raw data with the specified MIME type.
resp, _ := conv.Send(ctx, "Review this PDF", sdk.WithDocumentData(pdfBytes, types.MIMETypePDF),)func WithDocumentFile
Section titled “func WithDocumentFile”func WithDocumentFile(path string) SendOptionWithDocumentFile attaches a document from a file path (PDF, Word, markdown, etc.).
resp, _ := conv.Send(ctx, "Analyze this document", sdk.WithDocumentFile("contract.pdf"),)func WithFile
Section titled “func WithFile”func WithFile(name string, data []byte) SendOptionWithFile attaches a file with the given name and content.
Deprecated: Use WithDocumentFile or WithDocumentData instead for proper document handling. This function is kept for backward compatibility but should not be used for new code as it cannot properly handle binary files.
resp, _ := conv.Send(ctx, "Analyze this data", sdk.WithFile("data.csv", csvBytes),)func WithImageData
Section titled “func WithImageData”func WithImageData(data []byte, mimeType string, detail ...*string) SendOptionWithImageData attaches an image from raw bytes.
resp, _ := conv.Send(ctx, "What's in this image?", sdk.WithImageData(imageBytes, "image/png"),)func WithImageFile
Section titled “func WithImageFile”func WithImageFile(path string, detail ...*string) SendOptionWithImageFile attaches an image from a file path.
resp, _ := conv.Send(ctx, "What's in this image?", sdk.WithImageFile("/path/to/image.jpg"),)func WithImageURL
Section titled “func WithImageURL”func WithImageURL(url string, detail ...*string) SendOptionWithImageURL attaches an image from a URL.
resp, _ := conv.Send(ctx, "What's in this image?", sdk.WithImageURL("https://example.com/photo.jpg"),)func WithVideoData
Section titled “func WithVideoData”func WithVideoData(data []byte, mimeType string) SendOptionWithVideoData attaches a video from raw bytes.
resp, _ := conv.Send(ctx, "Describe this video", sdk.WithVideoData(videoBytes, "video/mp4"),)func WithVideoFile
Section titled “func WithVideoFile”func WithVideoFile(path string) SendOptionWithVideoFile attaches a video from a file path.
resp, _ := conv.Send(ctx, "Describe this video", sdk.WithVideoFile("/path/to/video.mp4"),)type SessionMode
Section titled “type SessionMode”SessionMode represents the conversation’s session mode.
type SessionMode intconst ( // UnaryMode for request/response conversations. UnaryMode SessionMode = iota // DuplexMode for bidirectional streaming conversations. DuplexMode)type StreamChunk
Section titled “type StreamChunk”StreamChunk represents a single chunk in a streaming response.
```go
type StreamChunk struct {
    // Type of this chunk
    Type ChunkType

    // Text content (for ChunkText type)
    Text string

    // Tool call (for ChunkToolCall type)
    ToolCall *types.MessageToolCall

    // Media content (for ChunkMedia type)
    Media *types.MediaContent

    // Complete response (for ChunkDone type)
    Message *Response

    // Error (if any occurred)
    Error error
}
```

type ToolError

ToolError represents an error executing a tool.
```go
type ToolError struct {
    // ToolName is the name of the tool that failed.
    ToolName string

    // Cause is the underlying error from the tool handler.
    Cause error
}
```

func (*ToolError) Error

```go
func (e *ToolError) Error() string
```

Error implements the error interface.
func (*ToolError) Unwrap
```go
func (e *ToolError) Unwrap() error
```

Unwrap returns the underlying error.
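Tool failures can be matched with errors.As; for example (a sketch, assuming `errors` is imported):

```go
_, err := conv.Send(ctx, "Check the weather")
var toolErr *sdk.ToolError
if errors.As(err, &toolErr) {
    log.Printf("tool %s failed: %v", toolErr.ToolName, toolErr.Cause)
}
```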
type ToolHandler
ToolHandler is a function that executes a tool call. It receives the parsed arguments from the LLM and returns a result.
The args map contains the arguments as specified in the tool’s schema. The return value should be JSON-serializable.
conv.OnTool("get_weather", func(args map[string]any) (any, error) { city := args["city"].(string) return weatherAPI.GetCurrent(city)})type ToolHandler func(args map[string]any) (any, error)type ToolHandlerCtx
Section titled “type ToolHandlerCtx”ToolHandlerCtx is like ToolHandler but receives a context. Use this when your tool implementation needs context for cancellation or deadlines.
conv.OnToolCtx("search_db", func(ctx context.Context, args map[string]any) (any, error) { return db.SearchWithContext(ctx, args["query"].(string))})type ToolHandlerCtx func(ctx context.Context, args map[string]any) (any, error)type VADModeConfig
Section titled “type VADModeConfig”VADModeConfig configures VAD (Voice Activity Detection) mode for voice conversations. In VAD mode, the pipeline processes audio through: AudioTurnStage → STTStage → ProviderStage → TTSStage
This enables voice conversations using standard text-based LLMs.
```go
type VADModeConfig struct {
    // SilenceDuration is how long silence must persist to trigger turn complete.
    // Default: 800ms
    SilenceDuration time.Duration

    // MinSpeechDuration is minimum speech before turn can complete.
    // Default: 200ms
    MinSpeechDuration time.Duration

    // MaxTurnDuration is maximum turn length before forcing completion.
    // Default: 30s
    MaxTurnDuration time.Duration

    // SampleRate is the audio sample rate.
    // Default: 16000
    SampleRate int

    // Language is the language hint for STT (e.g., "en", "es").
    // Default: "en"
    Language string

    // Voice is the TTS voice to use.
    // Default: "alloy"
    Voice string

    // Speed is the TTS speech rate (0.5-2.0).
    // Default: 1.0
    Speed float64
}
```

func DefaultVADModeConfig

```go
func DefaultVADModeConfig() *VADModeConfig
```

DefaultVADModeConfig returns sensible defaults for VAD mode.
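A common pattern is to start from the defaults and override selected fields (a sketch; `sttService` and `ttsService` are as in the WithVADMode examples):

```go
cfg := sdk.DefaultVADModeConfig()
cfg.Voice = "nova"
cfg.SilenceDuration = 500 * time.Millisecond

conv, _ := sdk.OpenDuplex("./assistant.pack.json", "voice-chat",
    sdk.WithVADMode(sttService, ttsService, cfg),
)
```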
type ValidationError
ValidationError represents a validation failure.

```go
type ValidationError struct {
    // ValidatorType is the type of validator that failed (e.g., "banned_words").
    ValidatorType string

    // Message describes what validation rule was violated.
    Message string

    // Details contains validator-specific information about the failure.
    Details map[string]any
}
```

func AsValidationError

```go
func AsValidationError(err error) (*ValidationError, bool)
```

AsValidationError checks if an error is a ValidationError and returns it.

```go
resp, err := conv.Send(ctx, message)
if err != nil {
    if vErr, ok := sdk.AsValidationError(err); ok {
        fmt.Printf("Validation failed: %s\n", vErr.ValidatorType)
    }
}
```

func (*ValidationError) Error

```go
func (e *ValidationError) Error() string
```

Error implements the error interface.
type ValidationMode
ValidationMode controls how validation failures are handled.

```go
type ValidationMode int

const (
    // ValidationModeError causes validation failures to return errors (default).
    ValidationModeError ValidationMode = iota

    // ValidationModeWarn logs validation failures but doesn't return errors.
    ValidationModeWarn

    // ValidationModeDisabled skips validation entirely.
    ValidationModeDisabled
)
```

type VideoStreamConfig
VideoStreamConfig configures realtime video/image streaming for duplex sessions. This enables webcam feeds, screen sharing, and continuous frame analysis.

```go
type VideoStreamConfig struct {
    // TargetFPS is the target frame rate for streaming.
    // Frames exceeding this rate will be dropped.
    // Default: 1.0 (one frame per second, suitable for most LLM vision scenarios)
    TargetFPS float64

    // MaxWidth is the maximum frame width in pixels.
    // Frames larger than this are resized. 0 means no limit.
    // Default: 0 (no resizing)
    MaxWidth int

    // MaxHeight is the maximum frame height in pixels.
    // Frames larger than this are resized. 0 means no limit.
    // Default: 0 (no resizing)
    MaxHeight int

    // Quality is the JPEG compression quality (1-100) for frame encoding.
    // Higher values = better quality, larger size.
    // Default: 85
    Quality int

    // EnableResize enables automatic frame resizing when dimensions exceed limits.
    // Default: true (resizing enabled when MaxWidth/MaxHeight are set)
    EnableResize bool
}
```

func DefaultVideoStreamConfig

```go
func DefaultVideoStreamConfig() *VideoStreamConfig
```

DefaultVideoStreamConfig returns sensible defaults for video streaming.
Generated by gomarkdoc