SDK API Reference
```go
import "github.com/AltairaLabs/PromptKit/sdk"
```

Package sdk provides a simple API for LLM conversations using PromptPack files.
SDK v2 is a pack-first SDK where everything starts from a .pack.json file. The pack contains prompts, variables, tools, validators, and model configuration. The SDK loads the pack and provides a minimal API to interact with LLMs.
Quick Start
The simplest usage is just a few lines:

```go
conv, err := sdk.Open("./assistant.pack.json", "chat")
if err != nil {
	log.Fatal(err)
}
defer conv.Close()

resp, _ := conv.Send(ctx, "Hello!")
fmt.Println(resp.Text())
```

Core Concepts
Opening a Conversation:
Use Open to load a pack file and create a conversation for a specific prompt:
```go
// Minimal - provider auto-detected from environment
conv, _ := sdk.Open("./demo.pack.json", "troubleshooting")
```

```go
// With options - override model, provider, etc.
conv, _ := sdk.Open("./demo.pack.json", "troubleshooting",
	sdk.WithModel("gpt-4o"),
	sdk.WithAPIKey(os.Getenv("MY_OPENAI_KEY")),
)
```

Variables:
Variables defined in the pack are populated at runtime:
```go
conv.SetVar("customer_id", "acme-corp")
conv.SetVars(map[string]any{
	"customer_name": "ACME Corporation",
	"tier":          "premium",
})
```

Tools:
Tools defined in the pack just need implementation handlers:
```go
conv.OnTool("list_devices", func(args map[string]any) (any, error) {
	return myAPI.ListDevices(args["customer_id"].(string))
})
```

Streaming:
Stream responses chunk by chunk:
```go
for chunk := range conv.Stream(ctx, "Tell me a story") {
	fmt.Print(chunk.Text)
}
```

Design Principles
- Pack is the Source of Truth - The .pack.json file defines prompts, tools, validators, and pipeline configuration. The SDK configures itself automatically.
- Convention Over Configuration - API keys from environment, provider auto-detection, model defaults from pack. Override only when needed.
- Progressive Disclosure - Simple things are simple, advanced features available but not required.
- Same Runtime, Same Behavior - SDK v2 uses the same runtime pipeline as Arena. Pack-defined behaviors work identically.
- Thin Wrapper - No type duplication. Core types like Message, ContentPart, CostInfo come directly from runtime/types.
Package Structure
The SDK is organized into sub-packages for specific functionality:
- sdk (this package): Entry point, Open, Conversation, Response
- sdk/tools: Typed tool handlers, HITL support
- sdk/stream: Streaming response handling
- sdk/hooks: Event subscription and lifecycle hooks
- sdk/session: Session management and persistence
- sdk/agui: AG-UI protocol adapter
Most users only need to import the root sdk package.
Runtime Types
Section titled “Runtime Types”The SDK uses runtime types directly - no duplication:
```go
import "github.com/AltairaLabs/PromptKit/runtime/types"

msg := &types.Message{Role: "user"}
msg.AddTextPart("Hello")
```

Key runtime types: [types.Message], [types.ContentPart], [types.MediaContent], [types.CostInfo], [types.ValidationResult].
Schema Reference
All pack examples conform to the PromptPack Specification v1.1.0: https://github.com/AltairaLabs/promptpack-spec/blob/main/schema/promptpack.schema.json
- Constants
- Variables
- func Evaluate(ctx context.Context, opts EvaluateOpts) ([]evals.EvalResult, error)
- func GracefulShutdown(mgr *ShutdownManager, timeout time.Duration)
- func ValidateEvalTypes(opts ValidateEvalTypesOpts) ([]evals.EvalDef, error)
- type A2AAgentBuilder
- func NewA2AAgent(url string) *A2AAgentBuilder
- func (b *A2AAgentBuilder) Build() *tools.A2AConfig
- func (b *A2AAgentBuilder) WithAuth(scheme, token string) *A2AAgentBuilder
- func (b *A2AAgentBuilder) WithAuthFromEnv(scheme, envVar string) *A2AAgentBuilder
- func (b *A2AAgentBuilder) WithHeader(key, value string) *A2AAgentBuilder
- func (b *A2AAgentBuilder) WithHeaderFromEnv(headerEnv string) *A2AAgentBuilder
- func (b *A2AAgentBuilder) WithRetryPolicy(maxRetries, initialDelayMs, maxDelayMs int) *A2AAgentBuilder
- func (b *A2AAgentBuilder) WithSkillFilter(filter *tools.A2ASkillFilter) *A2AAgentBuilder
- func (b *A2AAgentBuilder) WithTimeout(ms int) *A2AAgentBuilder
- type A2ACapability
- type A2AConversationOpener
- type A2AServer
- type A2AServerOption
- func WithA2ACard(card *a2a.AgentCard) A2AServerOption
- func WithA2AConversationTTL(d time.Duration) A2AServerOption
- func WithA2AIdleTimeout(d time.Duration) A2AServerOption
- func WithA2AMaxBodySize(n int64) A2AServerOption
- func WithA2APort(port int) A2AServerOption
- func WithA2AReadTimeout(d time.Duration) A2AServerOption
- func WithA2ATaskStore(store A2ATaskStore) A2AServerOption
- func WithA2ATaskTTL(d time.Duration) A2AServerOption
- func WithA2AWriteTimeout(d time.Duration) A2AServerOption
- type A2ATaskStore
- type AgentToolResolver
- func NewAgentToolResolver(pack *prompt.Pack) *AgentToolResolver
- func (r *AgentToolResolver) IsAgentTool(toolName string) bool
- func (r *AgentToolResolver) MemberNames() []string
- func (r *AgentToolResolver) ResolveAgentTools(toolNames []string) []*tools.ToolDescriptor
- func (r *AgentToolResolver) SetEndpointResolver(er EndpointResolver)
- type Capability
- type CapabilityContext
- type ChunkType
- type ClientToolHandler
- type ClientToolRequest
- type ClientToolRequestEvent
- type Conversation
- func Open(packPath, promptName string, opts …Option) (*Conversation, error)
- func OpenDuplex(packPath, promptName string, opts …Option) (*Conversation, error)
- func Resume(conversationID, packPath, promptName string, opts …Option) (*Conversation, error)
- func (c *Conversation) CheckPending(name string, args map[string]any) (*sdktools.PendingToolCall, bool)
- func (c *Conversation) Clear() error
- func (c *Conversation) Close() error
- func (c *Conversation) Continue(ctx context.Context) (*Response, error)
- func (c *Conversation) ContinueDuplex(ctx context.Context) error
- func (c *Conversation) Done() (<-chan struct{}, error)
- func (c *Conversation) EventBus() events.Bus
- func (c *Conversation) Fork() (*Conversation, error)
- func (c *Conversation) GetVar(name string) (string, bool)
- func (c *Conversation) ID() string
- func (c *Conversation) Messages(ctx context.Context) []types.Message
- func (c *Conversation) OnClientTool(name string, handler ClientToolHandler)
- func (c *Conversation) OnClientTools(handlers map[string]ClientToolHandler)
- func (c *Conversation) OnStreamEvent(handler StreamEventHandler)
- func (c *Conversation) OnTool(name string, handler ToolHandler)
- func (c *Conversation) OnToolAsync(name string, checkFunc func(args map[string]any) sdktools.PendingResult, execFunc ToolHandler)
- func (c *Conversation) OnToolCtx(name string, handler ToolHandlerCtx)
- func (c *Conversation) OnToolExecutor(name string, executor tools.Executor)
- func (c *Conversation) OnToolHTTP(name string, config *sdktools.HTTPToolConfig)
- func (c *Conversation) OnTools(handlers map[string]ToolHandler)
- func (c *Conversation) PendingTools() []*sdktools.PendingToolCall
- func (c *Conversation) RejectClientTool(_ context.Context, callID, reason string)
- func (c *Conversation) RejectTool(id, reason string) (*sdktools.ToolResolution, error)
- func (c *Conversation) ResolveTool(id string) (*sdktools.ToolResolution, error)
- func (c *Conversation) Response() (<-chan providers.StreamChunk, error)
- func (c *Conversation) Resume(ctx context.Context) (*Response, error)
- func (c *Conversation) ResumeStream(ctx context.Context) <-chan StreamChunk
- func (c *Conversation) Send(ctx context.Context, message any, opts …SendOption) (*Response, error)
- func (c *Conversation) SendChunk(ctx context.Context, chunk *providers.StreamChunk) error
- func (c *Conversation) SendFrame(ctx context.Context, frame *session.ImageFrame) error
- func (c *Conversation) SendText(ctx context.Context, text string) error
- func (c *Conversation) SendToolResult(_ context.Context, callID string, result any) error
- func (c *Conversation) SendToolResultMultimodal(_ context.Context, callID string, parts []types.ContentPart) error
- func (c *Conversation) SendVideoChunk(ctx context.Context, chunk *session.VideoChunk) error
- func (c *Conversation) SessionError() error
- func (c *Conversation) SetVar(name, value string)
- func (c *Conversation) SetVars(vars map[string]any)
- func (c *Conversation) SetVarsFromEnv(prefix string)
- func (c *Conversation) Stream(ctx context.Context, message any, opts …SendOption) <-chan StreamChunk
- func (c *Conversation) StreamRaw(ctx context.Context, message any) (<-chan streamPkg.Chunk, error)
- func (c *Conversation) StreamWithCallback(ctx context.Context, message any, opts …SendOption) (*Response, error)
- func (c *Conversation) ToolRegistry() *tools.Registry
- func (c *Conversation) TriggerStart(ctx context.Context, message string) error
- type CredentialOption
- type EndpointResolver
- type EvaluateOpts
- type InMemoryA2ATaskStore
- type LocalAgentExecutor
- type MCPServerBuilder
- func NewMCPServer(name, command string, args …string) *MCPServerBuilder
- func (b *MCPServerBuilder) Build() mcp.ServerConfig
- func (b *MCPServerBuilder) WithArgs(args …string) *MCPServerBuilder
- func (b *MCPServerBuilder) WithEnv(key, value string) *MCPServerBuilder
- func (b *MCPServerBuilder) WithTimeout(ms int) *MCPServerBuilder
- func (b *MCPServerBuilder) WithToolFilter(filter *mcp.ToolFilter) *MCPServerBuilder
- func (b *MCPServerBuilder) WithWorkingDir(dir string) *MCPServerBuilder
- type MapEndpointResolver
- type MemoryCapability
- func NewMemoryCapability(store memory.Store, scope map[string]string) *MemoryCapability
- func (c *MemoryCapability) Close() error
- func (c *MemoryCapability) Init(_ CapabilityContext) error
- func (c *MemoryCapability) Name() string
- func (c *MemoryCapability) RegisterTools(registry *tools.Registry)
- func (c *MemoryCapability) WithExtractor(e memory.Extractor) *MemoryCapability
- func (c *MemoryCapability) WithRetriever(r memory.Retriever) *MemoryCapability
- type MemoryOption
- type MultiAgentSession
- func OpenMultiAgent(packPath string, opts …Option) (*MultiAgentSession, error)
- func (s *MultiAgentSession) Close() error
- func (s *MultiAgentSession) Entry() *Conversation
- func (s *MultiAgentSession) Members() map[string]*Conversation
- func (s *MultiAgentSession) Send(ctx context.Context, message any, opts …SendOption) (*Response, error)
- type Option
- func WithA2AAgent(builder *A2AAgentBuilder) Option
- func WithA2ATools(bridge *a2a.ToolBridge) Option
- func WithAPIKey(key string) Option
- func WithAgentEndpoints(resolver EndpointResolver) Option
- func WithAutoResize(maxWidth, maxHeight int) Option
- func WithAutoSummarize(provider providers.Provider, threshold, batchSize int) Option
- func WithAzure(endpoint, providerType, model string, opts …PlatformOption) Option
- func WithBedrock(region, providerType, model string, opts …PlatformOption) Option
- func WithCapability(capability Capability) Option
- func WithCompaction(enabled bool) Option
- func WithCompactionRules(rules …stage.CompactionRule) Option
- func WithCompactionStrategy(strategy stage.CompactionStrategy) Option
- func WithContextCarryForward() Option
- func WithContextRetrieval(embeddingProvider providers.EmbeddingProvider, topK int) Option
- func WithContextWindow(recentMessages int) Option
- func WithConversationID(id string) Option
- func WithCredential(opts …CredentialOption) Option
- func WithEvalGroups(groups …string) Option
- func WithEvalRegistry(r *evals.EvalTypeRegistry) Option
- func WithEvalRunner(r *evals.EvalRunner) Option
- func WithEvalsDisabled() Option
- func WithEventBus(bus events.Bus) Option
- func WithEventStore(store events.EventStore) Option
- func WithExecutionTimeout(d time.Duration) Option
- func WithImagePreprocessing(cfg *stage.ImagePreprocessConfig) Option
- func WithJSONMode() Option
- func WithJudgeProvider(jp handlers.JudgeProvider) Option
- func WithLogger(l *slog.Logger) Option
- func WithMCP(name, command string, args …string) Option
- func WithMCPServer(builder *MCPServerBuilder) Option
- func WithMaxActiveSkillsOption(n int) Option
- func WithMaxConcurrentEvals(n int) Option
- func WithMaxMessageSize(bytes int) Option
- func WithMemory(store memory.Store, scope map[string]string, opts …MemoryOption) Option
- func WithMessageLog(log statestore.MessageLog) Option
- func WithMetricRecorder(r evals.MetricRecorder) Option
- func WithMetrics(collector *metrics.Collector, instanceLabels map[string]string) Option
- func WithModel(model string) Option
- func WithProvider(p providers.Provider) Option
- func WithProviderHook(h hooks.ProviderHook) Option
- func WithRecording(cfg *RecordingConfig) Option
- func WithRelevanceTruncation(cfg *RelevanceConfig) Option
- func WithResponseFormat(format *providers.ResponseFormat) Option
- func WithRuntimeConfig(path string) Option
- func WithSessionHook(h hooks.SessionHook) Option
- func WithSessionMetadata(metadata map[string]any) Option
- func WithShutdownManager(mgr *ShutdownManager) Option
- func WithSkillSelectorOption(s skills.SkillSelector) Option
- func WithSkillsDir(dir string) Option
- func WithSkipSchemaValidation() Option
- func WithStateStore(store statestore.Store) Option
- func WithStreamingConfig(streamingConfig *providers.StreamingInputConfig) Option
- func WithStreamingVideo(cfg *VideoStreamConfig) Option
- func WithTTS(service tts.Service) Option
- func WithTokenBudget(tokens int) Option
- func WithToolHook(h hooks.ToolHook) Option
- func WithToolRegistry(registry *tools.Registry) Option
- func WithTracerProvider(tp trace.TracerProvider) Option
- func WithTruncation(strategy string) Option
- func WithTurnDetector(detector audio.TurnDetector) Option
- func WithUserID(id string) Option
- func WithVADMode(sttService stt.Service, ttsService tts.Service, cfg *VADModeConfig) Option
- func WithVariableProvider(p variables.Provider) Option
- func WithVariables(vars map[string]string) Option
- func WithVertex(region, project, providerType, model string, opts …PlatformOption) Option
- type PackError
- type PackTemplate
- type PendingClientTool
- type PendingTool
- type PlatformOption
- func WithAWSProfile(profile string) PlatformOption
- func WithAWSRoleARN(arn string) PlatformOption
- func WithAzureClientSecret(tenantID, clientID, secret string) PlatformOption
- func WithAzureManagedIdentity(clientID string) PlatformOption
- func WithGCPServiceAccount(keyPath string) PlatformOption
- func WithPlatformEndpoint(endpoint string) PlatformOption
- func WithPlatformProject(project string) PlatformOption
- func WithPlatformRegion(region string) PlatformOption
- type ProviderError
- type RecordingConfig
- type RelevanceConfig
- type Response
- func NewResponseForTest(text string, toolCalls []types.MessageToolCall, opts …ResponseTestOption) *Response
- func (r *Response) ClientTools() []PendingClientTool
- func (r *Response) Cost() float64
- func (r *Response) Duration() time.Duration
- func (r *Response) HasMedia() bool
- func (r *Response) HasPendingClientTools() bool
- func (r *Response) HasToolCalls() bool
- func (r *Response) InputTokens() int
- func (r *Response) Message() *types.Message
- func (r *Response) OutputTokens() int
- func (r *Response) Parts() []types.ContentPart
- func (r *Response) PendingTools() []PendingTool
- func (r *Response) Text() string
- func (r *Response) TokensUsed() int
- func (r *Response) ToolCalls() []types.MessageToolCall
- func (r *Response) Validations() []types.ValidationResult
- type ResponseTestOption
- type SendOption
- func WithAudioData(data []byte, mimeType string) SendOption
- func WithAudioFile(path string) SendOption
- func WithDocumentData(data []byte, mimeType string) SendOption
- func WithDocumentFile(path string) SendOption
- func WithFile(name string, data []byte) SendOption
- func WithImageData(data []byte, mimeType string, detail …*string) SendOption
- func WithImageFile(path string, detail …*string) SendOption
- func WithImageURL(url string, detail …*string) SendOption
- func WithVideoData(data []byte, mimeType string) SendOption
- func WithVideoFile(path string) SendOption
- type SessionMode
- type ShutdownManager
- type SkillsCapability
- func NewSkillsCapability(sources []skills.SkillSource, opts …SkillsOption) *SkillsCapability
- func (c *SkillsCapability) Close() error
- func (c *SkillsCapability) Executor() *skills.Executor
- func (c *SkillsCapability) Init(ctx CapabilityContext) error
- func (c *SkillsCapability) Name() string
- func (c *SkillsCapability) RegisterTools(registry *tools.Registry)
- type SkillsOption
- type StatefulCapability
- type StaticEndpointResolver
- type StreamChunk
- type StreamDoneEvent
- type StreamEvent
- type StreamEventHandler
- type TextDeltaEvent
- type ToolError
- type ToolHandler
- type ToolHandlerCtx
- type VADModeConfig
- type ValidateEvalTypesOpts
- type ValidationError
- type VideoStreamConfig
- type WorkflowCapability
- func NewWorkflowCapability() *WorkflowCapability
- func (w *WorkflowCapability) Close() error
- func (w *WorkflowCapability) Init(ctx CapabilityContext) error
- func (w *WorkflowCapability) Name() string
- func (w *WorkflowCapability) RegisterTools(_ *tools.Registry)
- func (w *WorkflowCapability) RegisterToolsForState(registry *tools.Registry, state *workflow.State)
- type WorkflowConversation
- func OpenWorkflow(packPath string, opts …Option) (*WorkflowConversation, error)
- func ResumeWorkflow(workflowID, packPath string, opts …Option) (*WorkflowConversation, error)
- func (wc *WorkflowConversation) ActiveConversation() *Conversation
- func (wc *WorkflowConversation) AvailableEvents() []string
- func (wc *WorkflowConversation) Close() error
- func (wc *WorkflowConversation) Context() *workflow.Context
- func (wc *WorkflowConversation) CurrentPromptTask() string
- func (wc *WorkflowConversation) CurrentState() string
- func (wc *WorkflowConversation) IsComplete() bool
- func (wc *WorkflowConversation) OrchestrationMode() workflow.Orchestration
- func (wc *WorkflowConversation) Send(ctx context.Context, message any, opts …SendOption) (*Response, error)
- func (wc *WorkflowConversation) Transition(event string) (string, error)
Constants
DefaultMaxConcurrentEvals is the default maximum number of concurrent eval goroutines.

```go
const DefaultMaxConcurrentEvals = 10
```

Variables

Re-exported constructors and sentinel errors.

```go
var (
	NewInMemoryA2ATaskStore = a2aserver.NewInMemoryTaskStore

	ErrTaskNotFound      = a2aserver.ErrTaskNotFound
	ErrTaskAlreadyExists = a2aserver.ErrTaskAlreadyExists
	ErrInvalidTransition = a2aserver.ErrInvalidTransition
	ErrTaskTerminal      = a2aserver.ErrTaskTerminal
)
```

Sentinel errors for common failure cases.
```go
var (
	// ErrConversationClosed is returned when Send or Stream is called on a closed conversation.
	ErrConversationClosed = errors.New("conversation is closed")

	// ErrConversationNotFound is returned by Resume when the conversation ID doesn't exist.
	ErrConversationNotFound = errors.New("conversation not found")

	// ErrNoStateStore is returned by Resume when no state store is configured.
	ErrNoStateStore = errors.New("no state store configured")

	// ErrPromptNotFound is returned when the specified prompt doesn't exist in the pack.
	ErrPromptNotFound = errors.New("prompt not found in pack")

	// ErrPackNotFound is returned when the pack file doesn't exist.
	ErrPackNotFound = errors.New("pack file not found")

	// ErrProviderNotDetected is returned when no provider could be auto-detected.
	ErrProviderNotDetected = errors.New("could not detect provider: no API keys found in environment")

	// ErrToolNotRegistered is returned when the LLM calls a tool that has no handler.
	ErrToolNotRegistered = errors.New("tool handler not registered")

	// ErrToolNotInPack is returned when trying to register a handler for a tool not in the pack.
	ErrToolNotInPack = errors.New("tool not defined in pack")

	// ErrNoWorkflow is returned when OpenWorkflow is called on a pack without a workflow section.
	ErrNoWorkflow = errors.New("pack has no workflow section")

	// ErrWorkflowClosed is returned when Send or Transition is called on a closed WorkflowConversation.
	ErrWorkflowClosed = errors.New("workflow conversation is closed")

	// ErrWorkflowTerminal is returned when Transition is called on a terminal state.
	ErrWorkflowTerminal = errors.New("workflow is in terminal state")

	// ErrMessageTooLarge is returned when a user message exceeds the configured maximum size.
	ErrMessageTooLarge = errors.New("message exceeds maximum size")
)
```

ErrShutdownManagerClosed is returned when Register is called after Shutdown.

```go
var ErrShutdownManagerClosed = errors.New("shutdown manager is closed; cannot register new conversations")
```

func Evaluate
```go
func Evaluate(ctx context.Context, opts EvaluateOpts) ([]evals.EvalResult, error)
```

Evaluate runs evals from a PromptPack against a conversation snapshot. No live agent or provider connection is needed — just messages in, results out.
Eval definitions can come from three sources (checked in order):
- EvalDefs — pass pre-resolved definitions directly
- PackData — parse a PromptPack from JSON bytes
- PackPath — load a PromptPack from the filesystem
The function builds an [evals.EvalContext] from the provided messages, dispatches to the appropriate runner method based on Trigger, and optionally emits events on the EventBus.
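As a sketch of this offline flow: PackPath, Trigger, and the return shape are documented above, but the snapshot field name (Messages here) and the trigger value are assumptions, so treat this as illustrative rather than exact:

```go
// Run pack-defined evals against an existing conversation's messages.
results, err := sdk.Evaluate(ctx, sdk.EvaluateOpts{
	PackPath: "./assistant.pack.json",
	// Messages is an assumed field name for the conversation snapshot.
	Messages: conv.Messages(ctx),
	// Trigger selects the runner method; the value here is illustrative.
	Trigger: "turn",
})
if err != nil {
	log.Fatal(err)
}
for _, r := range results {
	fmt.Printf("%+v\n", r)
}
```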
func GracefulShutdown
```go
func GracefulShutdown(mgr *ShutdownManager, timeout time.Duration)
```

GracefulShutdown listens for SIGTERM and SIGINT, then calls mgr.Shutdown with the given timeout. It is designed to be called as a goroutine:

```go
go sdk.GracefulShutdown(mgr, 30*time.Second)
```

func ValidateEvalTypes
```go
func ValidateEvalTypes(opts ValidateEvalTypesOpts) ([]evals.EvalDef, error)
```

ValidateEvalTypes checks that every eval type referenced in the resolved eval definitions has a registered handler in the EvalTypeRegistry. It returns the eval definitions whose types are missing, or nil if all are valid.
This is useful as a preflight check — e.g. at startup or in CI — to catch configuration errors (typos, missing RuntimeConfig bindings) before evals are actually executed.
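A minimal sketch of such a preflight check at startup; the ValidateEvalTypesOpts field and the EvalDef field accessed below are assumptions (only the function signature is documented above):

```go
// Fail fast if the pack references eval types with no registered handler.
missing, err := sdk.ValidateEvalTypes(sdk.ValidateEvalTypesOpts{
	// PackPath is an assumed field, mirroring EvaluateOpts.
	PackPath: "./assistant.pack.json",
})
if err != nil {
	log.Fatal(err)
}
for _, def := range missing {
	// The ID field on evals.EvalDef is assumed for illustration.
	log.Printf("eval %q references an unregistered eval type", def.ID)
}
```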
type A2AAgentBuilder
A2AAgentBuilder provides a fluent interface for configuring A2A agent connections.

```go
type A2AAgentBuilder struct {
	// contains filtered or unexported fields
}
```

func NewA2AAgent

```go
func NewA2AAgent(url string) *A2AAgentBuilder
```

NewA2AAgent creates a new A2A agent configuration builder.

```go
agent := sdk.NewA2AAgent("https://agent.example.com").
	WithAuth("Bearer", os.Getenv("AGENT_TOKEN")).
	WithHeader("X-Tenant-ID", "acme")

conv, _ := sdk.Open("./assistant.pack.json", "assistant",
	sdk.WithA2AAgent(agent),
)
```

func (*A2AAgentBuilder) Build

```go
func (b *A2AAgentBuilder) Build() *tools.A2AConfig
```

Build returns the A2AConfig for this agent.
func (*A2AAgentBuilder) WithAuth
```go
func (b *A2AAgentBuilder) WithAuth(scheme, token string) *A2AAgentBuilder
```

WithAuth sets authentication credentials for the A2A agent.

func (*A2AAgentBuilder) WithAuthFromEnv

```go
func (b *A2AAgentBuilder) WithAuthFromEnv(scheme, envVar string) *A2AAgentBuilder
```

WithAuthFromEnv sets authentication using an environment variable for the token.

func (*A2AAgentBuilder) WithHeader

```go
func (b *A2AAgentBuilder) WithHeader(key, value string) *A2AAgentBuilder
```

WithHeader adds a static header to all requests to this agent.

func (*A2AAgentBuilder) WithHeaderFromEnv

```go
func (b *A2AAgentBuilder) WithHeaderFromEnv(headerEnv string) *A2AAgentBuilder
```

WithHeaderFromEnv adds a header that reads its value from an environment variable. Format: "Header-Name=ENV_VAR_NAME".
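For example, the following sketch (header name and environment variable are placeholders) sends the value of the TENANT_ID environment variable as an X-Tenant-ID header on every request:

```go
agent := sdk.NewA2AAgent("https://agent.example.com").
	// "X-Tenant-ID=TENANT_ID" follows the documented Header-Name=ENV_VAR_NAME format.
	WithHeaderFromEnv("X-Tenant-ID=TENANT_ID")
```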
func (*A2AAgentBuilder) WithRetryPolicy
```go
func (b *A2AAgentBuilder) WithRetryPolicy(maxRetries, initialDelayMs, maxDelayMs int) *A2AAgentBuilder
```

WithRetryPolicy sets the retry policy for this agent.

func (*A2AAgentBuilder) WithSkillFilter

```go
func (b *A2AAgentBuilder) WithSkillFilter(filter *tools.A2ASkillFilter) *A2AAgentBuilder
```

WithSkillFilter sets a skill filter that controls which skills from this agent are exposed to the LLM.

func (*A2AAgentBuilder) WithTimeout

```go
func (b *A2AAgentBuilder) WithTimeout(ms int) *A2AAgentBuilder
```

WithTimeout sets the request timeout in milliseconds.
type A2ACapability
A2ACapability provides A2A agent tools to conversations. It unifies both the bridge path (explicit WithA2ATools) and the pack path (agents section in pack) under a single capability.

```go
type A2ACapability struct {
	// contains filtered or unexported fields
}
```

func NewA2ACapability

```go
func NewA2ACapability() *A2ACapability
```

NewA2ACapability creates a new A2ACapability.
func (*A2ACapability) Close
```go
func (c *A2ACapability) Close() error
```

Close is a no-op for A2ACapability.

func (*A2ACapability) Init

```go
func (c *A2ACapability) Init(ctx CapabilityContext) error
```

Init initializes the capability with pack context. If the pack has an agents section, it creates an AgentToolResolver.

func (*A2ACapability) Name

```go
func (c *A2ACapability) Name() string
```

Name returns the capability identifier.

func (*A2ACapability) RegisterTools

```go
func (c *A2ACapability) RegisterTools(registry *tools.Registry)
```

RegisterTools registers A2A tools into the registry. Bridge path: registers bridge tool descriptors + A2A executor. Pack path: resolves agent tools from prompt tools list + registers executor.
type A2AConversationOpener
A2AConversationOpener creates or retrieves a conversation for a context ID.

```go
type A2AConversationOpener = a2aserver.ConversationOpener
```

func A2AOpener

```go
func A2AOpener(packPath, promptName string, opts ...Option) A2AConversationOpener
```

A2AOpener returns an A2AConversationOpener backed by SDK conversations. Each call to the returned function opens a new conversation for the given context ID using sdk.Open with the provided pack path, prompt name, and options.
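Combining A2AOpener with NewA2AServer and the WithA2A* options documented below, a minimal server setup might look like the following sketch; the pack path, prompt name, and port are placeholders:

```go
// Open one SDK conversation per incoming A2A context ID.
opener := sdk.A2AOpener("./assistant.pack.json", "assistant",
	sdk.WithModel("gpt-4o"),
)

srv := sdk.NewA2AServer(opener,
	sdk.WithA2APort(8080),
	sdk.WithA2ATaskTTL(30*time.Minute),
)
// Start the server with its ListenAndServe method (see WithA2APort).
```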
type A2AServer
A2AServer is the A2A-protocol HTTP server.

```go
type A2AServer = a2aserver.Server
```

func NewA2AServer

```go
func NewA2AServer(opener A2AConversationOpener, opts ...A2AServerOption) *A2AServer
```

NewA2AServer creates a new A2A server with the given opener and options.
type A2AServerOption
A2AServerOption configures an A2AServer.

```go
type A2AServerOption = a2aserver.Option
```

func WithA2ACard

```go
func WithA2ACard(card *a2a.AgentCard) A2AServerOption
```

WithA2ACard sets the agent card served at /.well-known/agent.json.

func WithA2AConversationTTL

```go
func WithA2AConversationTTL(d time.Duration) A2AServerOption
```

WithA2AConversationTTL sets the conversation TTL for eviction.

func WithA2AIdleTimeout

```go
func WithA2AIdleTimeout(d time.Duration) A2AServerOption
```

WithA2AIdleTimeout sets the idle timeout.

func WithA2AMaxBodySize

```go
func WithA2AMaxBodySize(n int64) A2AServerOption
```

WithA2AMaxBodySize sets the max body size.

func WithA2APort

```go
func WithA2APort(port int) A2AServerOption
```

WithA2APort sets the TCP port for ListenAndServe.

func WithA2AReadTimeout

```go
func WithA2AReadTimeout(d time.Duration) A2AServerOption
```

WithA2AReadTimeout sets the read timeout.

func WithA2ATaskStore

```go
func WithA2ATaskStore(store A2ATaskStore) A2AServerOption
```

WithA2ATaskStore sets a custom task store.

func WithA2ATaskTTL

```go
func WithA2ATaskTTL(d time.Duration) A2AServerOption
```

WithA2ATaskTTL sets the task TTL for eviction.

func WithA2AWriteTimeout

```go
func WithA2AWriteTimeout(d time.Duration) A2AServerOption
```

WithA2AWriteTimeout sets the write timeout.
type A2ATaskStore
A2ATaskStore is the task persistence interface.

```go
type A2ATaskStore = a2aserver.TaskStore
```

type AgentToolResolver
AgentToolResolver resolves agent member references in tool lists to A2A-compatible tool descriptors.

```go
type AgentToolResolver struct {
	// contains filtered or unexported fields
}
```

func NewAgentToolResolver

```go
func NewAgentToolResolver(pack *prompt.Pack) *AgentToolResolver
```

NewAgentToolResolver creates a resolver from a compiled pack. Returns nil if the pack has no agents section.
func (*AgentToolResolver) IsAgentTool
```go
func (r *AgentToolResolver) IsAgentTool(toolName string) bool
```

IsAgentTool checks if a tool name refers to an agent member. It accepts both bare member keys ("summarizer") and qualified names ("a2a__summarizer").

func (*AgentToolResolver) MemberNames

```go
func (r *AgentToolResolver) MemberNames() []string
```

MemberNames returns the names of all agent members known to this resolver.

func (*AgentToolResolver) ResolveAgentTools

```go
func (r *AgentToolResolver) ResolveAgentTools(toolNames []string) []*tools.ToolDescriptor
```

ResolveAgentTools returns tool descriptors for all agent members that appear in the given tool names list. Each descriptor has Mode "a2a", an input schema with a required "query" field, and (if an EndpointResolver is set) an AgentURL in A2AConfig.

func (*AgentToolResolver) SetEndpointResolver

```go
func (r *AgentToolResolver) SetEndpointResolver(er EndpointResolver)
```

SetEndpointResolver configures how agent URLs are resolved. When nil, descriptors are created without an AgentURL (suitable for testing or when endpoints are resolved later).
type Capability
Capability represents a platform feature that provides namespaced tools. Capabilities are auto-inferred from pack structure or explicitly added via WithCapability.

```go
type Capability interface {
	// Name returns the capability identifier (e.g., "workflow", "a2a").
	Name() string

	// Init initializes the capability with pack context.
	Init(ctx CapabilityContext) error

	// RegisterTools registers the capability's tools into the registry.
	RegisterTools(registry *tools.Registry)

	// Close releases any resources held by the capability.
	Close() error
}
```

type CapabilityContext

CapabilityContext provides read-only access to pack and config during Init.

```go
type CapabilityContext struct {
	Pack       *pack.Pack
	PromptName string
	UserID     string
	Metadata   map[string]any
}
```

type ChunkType
ChunkType identifies the type of a streaming chunk.

```go
type ChunkType int

const (
	// ChunkText indicates the chunk contains text content.
	ChunkText ChunkType = iota

	// ChunkToolCall indicates the chunk contains a tool call.
	ChunkToolCall

	// ChunkMedia indicates the chunk contains media content.
	ChunkMedia

	// ChunkDone indicates streaming is complete.
	ChunkDone

	// ChunkClientTool indicates a client tool request that needs caller fulfillment.
	ChunkClientTool
)
```

func (ChunkType) String

```go
func (t ChunkType) String() string
```

String returns the string representation of the chunk type.
type ClientToolHandler
ClientToolHandler is a function that fulfills a client-side tool call. It receives a context (carrying the tool timeout from ClientConfig.TimeoutMs) and a ClientToolRequest with the invocation details.
The return value should be JSON-serializable and will be sent back to the LLM as the tool result.
```go
type ClientToolHandler func(ctx context.Context, req ClientToolRequest) (any, error)
```

type ClientToolRequest
Section titled “type ClientToolRequest”ClientToolRequest contains information about a client-side tool invocation. It is passed to handlers registered via Conversation.OnClientTool.
```go
type ClientToolRequest struct {
	// ToolName is the tool's name as defined in the pack.
	ToolName string

	// CallID is the provider-assigned ID for this particular invocation.
	CallID string

	// Args contains the parsed arguments from the LLM.
	Args map[string]any

	// ConsentMsg is the human-readable consent message from the pack's
	// client.consent.message field. Empty when no consent is configured.
	ConsentMsg string

	// Categories are the semantic consent categories (e.g., ["location"]).
	Categories []string

	// Descriptor provides the full tool descriptor for advanced use cases.
	Descriptor *tools.ToolDescriptor
}
```

type ClientToolRequestEvent
Section titled “type ClientToolRequestEvent”ClientToolRequestEvent is emitted when the pipeline encounters a client tool that needs caller fulfillment.
```go
type ClientToolRequestEvent struct {
	// CallID is the provider-assigned ID for this tool invocation.
	CallID string

	// ToolName is the tool's name as defined in the pack.
	ToolName string

	// Args contains the parsed arguments from the LLM.
	Args map[string]any

	// ConsentMsg is the human-readable consent message.
	ConsentMsg string

	// Categories are the semantic consent categories.
	Categories []string
}
```

type Conversation
Section titled “type Conversation”Conversation represents an active LLM conversation.
A conversation maintains:
- Connection to the LLM provider
- Message history (context)
- Variable state for template substitution
- Tool handlers for function calling
- Validation state
Conversations are created via Open or Resume and are safe for concurrent use. Each Open call creates an independent conversation with isolated state.
Basic usage:

```go
conv, _ := sdk.Open("./assistant.pack.json", "chat")
conv.SetVar("user_name", "Alice")

resp, _ := conv.Send(ctx, "Hello!")
fmt.Println(resp.Text())

resp, _ = conv.Send(ctx, "What's my name?") // Remembers context
fmt.Println(resp.Text())                    // "Your name is Alice"
```

```go
type Conversation struct {
	// contains filtered or unexported fields
}
```

func Open
Section titled “func Open”func Open(packPath, promptName string, opts ...Option) (*Conversation, error)Open loads a pack file and creates a new conversation for the specified prompt.
This is the primary entry point for SDK v2. It:
- Loads and parses the pack file
- Auto-detects the provider from environment (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.)
- Configures the runtime pipeline based on pack settings
- Creates an isolated conversation with its own state
Basic usage:

```go
conv, err := sdk.Open("./assistant.pack.json", "chat")
if err != nil {
	log.Fatal(err)
}
defer conv.Close()

resp, _ := conv.Send(ctx, "Hello!")
fmt.Println(resp.Text())
```

With options:

```go
conv, err := sdk.Open("./assistant.pack.json", "chat",
	sdk.WithModel("gpt-4o"),
	sdk.WithAPIKey(os.Getenv("MY_KEY")),
	sdk.WithStateStore(redisStore),
)
```

The packPath can be:
- Absolute path: “/path/to/assistant.pack.json”
- Relative path: ”./packs/assistant.pack.json”
- URL: “https://example.com/packs/assistant.pack.json” (future)
The promptName must match a prompt ID defined in the pack’s “prompts” section.
func OpenDuplex
Section titled “func OpenDuplex”func OpenDuplex(packPath, promptName string, opts ...Option) (*Conversation, error)OpenDuplex loads a pack file and creates a new duplex streaming conversation for the specified prompt.
This creates a conversation in duplex mode for bidirectional streaming interactions. Use this when you need real-time streaming input/output with the LLM.
Basic usage:

```go
conv, err := sdk.OpenDuplex("./assistant.pack.json", "chat")
if err != nil {
	log.Fatal(err)
}
defer conv.Close()

// Send streaming input
go func() {
	conv.SendText(ctx, "Hello, ")
	conv.SendText(ctx, "how are you?")
}()

// Receive streaming output
respCh, _ := conv.Response()
for chunk := range respCh {
	fmt.Print(chunk.Content)
}
```

The provider must support streaming input (implement providers.StreamInputSupport). Currently supported providers: Gemini with certain models.
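The concurrency shape of duplex mode, one goroutine writing input while the caller drains output, can be rehearsed without a provider. This stdlib-only sketch uses a fake session that uppercases its input; none of these names come from the SDK:

```go
package main

import (
	"fmt"
	"strings"
)

// fakeDuplex stands in for a duplex session: everything written to `in`
// comes back uppercased on the returned output channel, which is closed
// when the input side is closed.
func fakeDuplex(in <-chan string) <-chan string {
	out := make(chan string)
	go func() {
		defer close(out)
		for text := range in {
			out <- strings.ToUpper(text)
		}
	}()
	return out
}

func main() {
	in := make(chan string)
	out := fakeDuplex(in)

	// Send streaming input from one goroutine...
	go func() {
		defer close(in)
		in <- "hello, "
		in <- "how are you?"
	}()

	// ...while receiving streaming output on another.
	for chunk := range out {
		fmt.Print(chunk)
	}
	fmt.Println()
}
```

The same send-while-receiving structure applies to the real SendText/Response pair.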
func Resume
Section titled “func Resume”func Resume(conversationID, packPath, promptName string, opts ...Option) (*Conversation, error)Resume loads an existing conversation from state storage.
Use this to continue a conversation that was previously persisted:

```go
store := statestore.NewRedisStore("redis://localhost:6379")
conv, err := sdk.Resume("session-123", "./chat.pack.json", "assistant",
	sdk.WithStateStore(store),
)
if errors.Is(err, sdk.ErrConversationNotFound) {
	// Start new conversation
	conv, _ = sdk.Open("./chat.pack.json", "assistant",
		sdk.WithStateStore(store),
		sdk.WithConversationID("session-123"),
	)
}
```

Resume requires a state store to be configured. If no state store is provided, it returns ErrNoStateStore.
func (*Conversation) CheckPending
Section titled “func (*Conversation) CheckPending”func (c *Conversation) CheckPending(name string, args map[string]any) (*sdktools.PendingToolCall, bool)CheckPending checks if a tool call should be pending and creates it if so. Returns (pending call, should wait) - if should wait is true, the tool shouldn’t execute yet.
This method is used internally when processing tool calls from the LLM. It can also be useful for testing HITL workflows:

```go
pending, shouldWait := conv.CheckPending("risky_tool", args)
if shouldWait {
	// Tool requires approval
}
```

func (*Conversation) Clear
Section titled “func (*Conversation) Clear”func (c *Conversation) Clear() errorClear removes all messages from the conversation history.
This keeps the system prompt and variables but removes all user/assistant messages. Useful for starting fresh within the same conversation session. In duplex mode, this will close the session first if actively streaming.
func (*Conversation) Close
Section titled “func (*Conversation) Close”func (c *Conversation) Close() errorClose releases resources associated with the conversation.
After Close is called, Send and Stream will return ErrConversationClosed. It’s safe to call Close multiple times.
func (*Conversation) Continue
Section titled “func (*Conversation) Continue”func (c *Conversation) Continue(ctx context.Context) (*Response, error)Continue resumes conversation after resolving pending tools.
Call this after approving/rejecting all pending tools to continue the conversation with the tool results:

```go
resp, _ := conv.Send(ctx, "Process refund")
for _, pending := range resp.PendingTools() {
	conv.ResolveTool(pending.ID)
}
resp, _ = conv.Continue(ctx) // LLM receives tool results
```

func (*Conversation) ContinueDuplex
Section titled “func (*Conversation) ContinueDuplex”func (c *Conversation) ContinueDuplex(ctx context.Context) errorContinueDuplex sends resolved/rejected HITL tool results back into the duplex stream. Unlike Continue() (which re-executes the unary pipeline), this pushes tool results into the live duplex pipeline via SubmitToolResults.
Usage:

```go
for chunk := range conv.Response() {
	if len(chunk.PendingTools) > 0 {
		for _, pt := range chunk.PendingTools {
			conv.ResolveTool(pt.CallID) // or RejectTool
		}
		conv.ContinueDuplex(ctx)
	}
}
```

func (*Conversation) Done
Section titled “func (*Conversation) Done”func (c *Conversation) Done() (<-chan struct{}, error)Done returns a channel that’s closed when the duplex session ends. Only available when the conversation was opened with OpenDuplex().
func (*Conversation) EventBus
Section titled “func (*Conversation) EventBus”func (c *Conversation) EventBus() events.BusEventBus returns the conversation’s event bus for observability.
Use this to subscribe to runtime events like tool calls, validations, and provider requests:

```go
conv.EventBus().Subscribe(events.EventToolCallStarted, func(e *events.Event) {
	log.Printf("Tool call: %s", e.Data.(*events.ToolCallStartedData).ToolName)
})
```

For convenience methods, see the [hooks] package.
func (*Conversation) Fork
Section titled “func (*Conversation) Fork”func (c *Conversation) Fork() (*Conversation, error)Fork creates a copy of the current conversation state.
Use this to explore alternative conversation branches:

```go
conv.Send(ctx, "I want to plan a trip")
conv.Send(ctx, "What cities should I visit?")

// Fork to explore different paths
branch, err := conv.Fork()

conv.Send(ctx, "Tell me about Tokyo")   // Original path
branch.Send(ctx, "Tell me about Kyoto") // Branch path
```

The forked conversation is completely independent - changes to one do not affect the other.
func (*Conversation) GetVar
Section titled “func (*Conversation) GetVar”func (c *Conversation) GetVar(name string) (string, bool)GetVar returns the current value of a template variable. Returns empty string and false if the variable is not set.
func (*Conversation) ID
Section titled “func (*Conversation) ID”func (c *Conversation) ID() stringID returns the conversation’s unique identifier.
func (*Conversation) Messages
Section titled “func (*Conversation) Messages”func (c *Conversation) Messages(ctx context.Context) []types.MessageMessages returns the conversation history.
The returned slice is a copy - modifying it does not affect the conversation.
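Returning a defensive copy is what makes the history safe to hand out; the pattern itself is small, sketched here with a local message type rather than the SDK's types.Message:

```go
package main

import "fmt"

type message struct{ Role, Text string }

// snapshot returns a defensive copy so callers cannot mutate
// the internal history through the returned slice.
func snapshot(history []message) []message {
	out := make([]message, len(history))
	copy(out, history)
	return out
}

func main() {
	history := []message{{"user", "Hello"}}
	snap := snapshot(history)
	snap[0].Text = "mutated"
	fmt.Println(history[0].Text) // internal history is unchanged
}
```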
func (*Conversation) OnClientTool
Section titled “func (*Conversation) OnClientTool”func (c *Conversation) OnClientTool(name string, handler ClientToolHandler)OnClientTool registers a handler for a client-side tool.
Client tools (mode: "client") are tools that must be fulfilled on the caller's device, for example GPS, camera, or biometric sensors. The handler is invoked synchronously when the LLM calls the tool.

Example:

```go
conv.OnClientTool("get_location", func(ctx context.Context, req sdk.ClientToolRequest) (any, error) {
	coords, err := deviceGPS(ctx, req.Args["accuracy"].(string))
	if err != nil {
		return nil, err
	}
	return map[string]any{"lat": coords.Lat, "lng": coords.Lng}, nil
})
```

func (*Conversation) OnClientTools
Section titled “func (*Conversation) OnClientTools”func (c *Conversation) OnClientTools(handlers map[string]ClientToolHandler)OnClientTools registers multiple client tool handlers at once.
```go
conv.OnClientTools(map[string]sdk.ClientToolHandler{
	"get_location":  locationHandler,
	"read_contacts": contactsHandler,
})
```

func (*Conversation) OnStreamEvent
Section titled “func (*Conversation) OnStreamEvent”func (c *Conversation) OnStreamEvent(handler StreamEventHandler)OnStreamEvent registers a handler that will be called for each stream event during Conversation.StreamWithCallback.
Example:
```go
conv.OnStreamEvent(func(event sdk.StreamEvent) {
	switch e := event.(type) {
	case sdk.TextDeltaEvent:
		fmt.Print(e.Delta)
	case sdk.ClientToolRequestEvent:
		fmt.Printf("Tool %s needs fulfillment\n", e.ToolName)
	case sdk.StreamDoneEvent:
		fmt.Println("\nDone!")
	}
})
```

func (*Conversation) OnTool
Section titled “func (*Conversation) OnTool”func (c *Conversation) OnTool(name string, handler ToolHandler)OnTool registers a handler for a tool defined in the pack.
The tool name must match a tool defined in the pack’s tools section. When the LLM calls the tool, your handler receives the parsed arguments and returns a result.
```go
conv.OnTool("get_weather", func(args map[string]any) (any, error) {
	city := args["city"].(string)
	return weatherAPI.GetCurrent(city)
})
```

The handler's return value is automatically serialized to JSON and sent back to the LLM as the tool result.
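Because the result round-trips through encoding/json, anything a handler returns must marshal cleanly (maps, slices, exported struct fields). A quick standalone illustration; toToolResult is a hypothetical helper, not an SDK function:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// toToolResult mirrors what happens to a handler's return value:
// it must survive json.Marshal before being handed back to the LLM.
func toToolResult(v any) (string, error) {
	b, err := json.Marshal(v)
	return string(b), err
}

func main() {
	s, err := toToolResult(map[string]any{"temp_c": 21.5, "conditions": "sunny"})
	fmt.Println(s, err)
}
```

Unexported struct fields and values like channels or funcs would be dropped or rejected here, so prefer plain data shapes in handler results.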
func (*Conversation) OnToolAsync
Section titled “func (*Conversation) OnToolAsync”func (c *Conversation) OnToolAsync(name string, checkFunc func(args map[string]any) sdktools.PendingResult, execFunc ToolHandler)OnToolAsync registers a handler that may require approval before execution.
Use this for Human-in-the-Loop (HITL) workflows where certain actions require human approval before proceeding:
```go
conv.OnToolAsync("process_refund",
	func(args map[string]any) sdk.PendingResult {
		amount := args["amount"].(float64)
		if amount > 1000 {
			return sdk.PendingResult{
				Reason:  "high_value_refund",
				Message: fmt.Sprintf("Refund of $%.2f requires approval", amount),
			}
		}
		return sdk.PendingResult{} // Proceed immediately
	},
	func(args map[string]any) (any, error) {
		// Execute the actual refund
		return refundAPI.Process(args)
	},
)
```

The first function checks if approval is needed; the second executes the action.
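Because the check function is pure, the approval policy can be unit-tested without a conversation. A standalone sketch, using a local PendingResult stand-in rather than the SDK's sdktools type:

```go
package main

import "fmt"

// PendingResult is a local stand-in for the SDK's type: a zero value means
// "execute immediately", a non-empty Reason means "hold for human approval".
type PendingResult struct {
	Reason  string
	Message string
}

// checkRefund mirrors the approval rule from the example above.
func checkRefund(args map[string]any) PendingResult {
	amount, _ := args["amount"].(float64)
	if amount > 1000 {
		return PendingResult{
			Reason:  "high_value_refund",
			Message: fmt.Sprintf("Refund of $%.2f requires approval", amount),
		}
	}
	return PendingResult{}
}

func main() {
	fmt.Println(checkRefund(map[string]any{"amount": 50.0}).Reason == "") // small refunds proceed
	fmt.Println(checkRefund(map[string]any{"amount": 2500.0}).Reason)     // large refunds are held
}
```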
Lock ordering contract: asyncHandlersMu is acquired first, then handlersMu. All code paths that acquire both locks must follow this order to avoid deadlock.
func (*Conversation) OnToolCtx
Section titled “func (*Conversation) OnToolCtx”func (c *Conversation) OnToolCtx(name string, handler ToolHandlerCtx)OnToolCtx registers a context-aware handler for a tool.
Use this when your tool implementation needs the request context for cancellation, deadlines, or tracing:
```go
conv.OnToolCtx("search_db", func(ctx context.Context, args map[string]any) (any, error) {
	return db.SearchWithContext(ctx, args["query"].(string))
})
```

func (*Conversation) OnToolExecutor
Section titled “func (*Conversation) OnToolExecutor”func (c *Conversation) OnToolExecutor(name string, executor tools.Executor)OnToolExecutor registers a custom executor for tools.
Use this when you need full control over tool execution or want to use a runtime executor directly:
```go
executor := &MyCustomExecutor{}
conv.OnToolExecutor("custom_tool", executor)
```

The executor must implement the runtime/tools.Executor interface.
func (*Conversation) OnToolHTTP
Section titled “func (*Conversation) OnToolHTTP”func (c *Conversation) OnToolHTTP(name string, config *sdktools.HTTPToolConfig)OnToolHTTP registers a tool that makes HTTP requests.
This is a convenience method for tools that call external APIs:
```go
conv.OnToolHTTP("create_ticket", sdktools.NewHTTPToolConfig(
	"https://api.tickets.example.com/tickets",
	sdktools.WithMethod("POST"),
	sdktools.WithHeader("Authorization", "Bearer "+apiKey),
	sdktools.WithTimeout(5000),
))
```

The tool arguments from the LLM are serialized to JSON and sent as the request body. The response is parsed and returned to the LLM.
func (*Conversation) OnTools
Section titled “func (*Conversation) OnTools”func (c *Conversation) OnTools(handlers map[string]ToolHandler)OnTools registers multiple tool handlers at once.
```go
conv.OnTools(map[string]sdk.ToolHandler{
	"get_weather": getWeatherHandler,
	"search_docs": searchDocsHandler,
	"send_email":  sendEmailHandler,
})
```

func (*Conversation) PendingTools
Section titled “func (*Conversation) PendingTools”func (c *Conversation) PendingTools() []*sdktools.PendingToolCallPendingTools returns all pending tool calls awaiting approval.
func (*Conversation) RejectClientTool
Section titled “func (*Conversation) RejectClientTool”func (c *Conversation) RejectClientTool(_ context.Context, callID, reason string)RejectClientTool rejects a deferred client tool with a human-readable reason.
callID must match one of the [PendingClientTool.CallID] values returned in the Response. The rejection reason is sent to the LLM as the tool result.
func (*Conversation) RejectTool
Section titled “func (*Conversation) RejectTool”func (c *Conversation) RejectTool(id, reason string) (*sdktools.ToolResolution, error)RejectTool rejects a pending tool call.
Use this when the human reviewer decides not to approve the tool:
```go
resp, _ := conv.RejectTool(pending.ID, "Not authorized for this amount")
```

func (*Conversation) ResolveTool
Section titled “func (*Conversation) ResolveTool”func (c *Conversation) ResolveTool(id string) (*sdktools.ToolResolution, error)ResolveTool approves and executes a pending tool call.
After calling Send() and receiving pending tools in the response, use this to approve and execute them:
```go
resp, _ := conv.Send(ctx, "Process refund for order #12345")
if len(resp.PendingTools()) > 0 {
	pending := resp.PendingTools()[0]
	// ... get approval ...
	result, _ := conv.ResolveTool(pending.ID)
	// Continue the conversation with the result
	resp, _ = conv.Continue(ctx)
}
```

func (*Conversation) Response
Section titled “func (*Conversation) Response”func (c *Conversation) Response() (<-chan providers.StreamChunk, error)Response returns the response channel for duplex streaming. Only available when the conversation was opened with OpenDuplex().
func (*Conversation) Resume
Section titled “func (*Conversation) Resume”func (c *Conversation) Resume(ctx context.Context) (*Response, error)Resume continues pipeline execution after all deferred client tools have been resolved via Conversation.SendToolResult or Conversation.RejectClientTool.
The resolved tool results are injected as tool-result messages and a new LLM round is triggered. The returned Response contains the assistant’s reply.
func (*Conversation) ResumeStream
Section titled “func (*Conversation) ResumeStream”func (c *Conversation) ResumeStream(ctx context.Context) <-chan StreamChunkResumeStream is the streaming equivalent of Conversation.Resume.
It continues pipeline execution after deferred client tools have been resolved, returning a channel of StreamChunk values. The final chunk (Type == ChunkDone) contains the complete Response.
Example:
```go
conv.SendToolResult(ctx, "call-1", locationData)
for chunk := range conv.ResumeStream(ctx) {
	if chunk.Error != nil {
		break
	}
	fmt.Print(chunk.Text)
}
```

func (*Conversation) Send
Section titled “func (*Conversation) Send”func (c *Conversation) Send(ctx context.Context, message any, opts ...SendOption) (*Response, error)Send sends a message to the LLM and returns the response.
The message can be a simple string or a *types.Message for multimodal content. Variables are substituted into the system prompt template before sending.
Basic usage:

```go
resp, err := conv.Send(ctx, "Hello!")
if err != nil {
	log.Fatal(err)
}
fmt.Println(resp.Text())
```

With message options:

```go
resp, err := conv.Send(ctx, "What's in this image?",
	sdk.WithImageFile("/path/to/image.jpg"),
)
```

Send automatically:
- Substitutes variables into the system prompt
- Runs any registered validators
- Handles tool calls if tools are defined
- Persists state if a state store is configured
func (*Conversation) SendChunk
Section titled “func (*Conversation) SendChunk”func (c *Conversation) SendChunk(ctx context.Context, chunk *providers.StreamChunk) errorSendChunk sends a streaming chunk in duplex mode. Only available when the conversation was opened with OpenDuplex().
func (*Conversation) SendFrame
Section titled “func (*Conversation) SendFrame”func (c *Conversation) SendFrame(ctx context.Context, frame *session.ImageFrame) errorSendFrame sends an image frame in duplex mode for realtime video scenarios. Only available when the conversation was opened with OpenDuplex().
Example:
```go
frame := &session.ImageFrame{
	Data:      jpegBytes,
	MIMEType:  "image/jpeg",
	Timestamp: time.Now(),
}
conv.SendFrame(ctx, frame)
```

func (*Conversation) SendText
Section titled “func (*Conversation) SendText”func (c *Conversation) SendText(ctx context.Context, text string) errorSendText sends text in duplex mode. Only available when the conversation was opened with OpenDuplex().
func (*Conversation) SendToolResult
Section titled “func (*Conversation) SendToolResult”func (c *Conversation) SendToolResult(_ context.Context, callID string, result any) errorSendToolResult provides the result for a deferred client tool.
callID must match one of the [PendingClientTool.CallID] values returned in the Response. result should be JSON-serializable.
After all pending tools have been resolved (via SendToolResult or RejectClientTool), call Conversation.Resume to continue the pipeline.
func (*Conversation) SendToolResultMultimodal
Section titled “func (*Conversation) SendToolResultMultimodal”func (c *Conversation) SendToolResultMultimodal(_ context.Context, callID string, parts []types.ContentPart) errorSendToolResultMultimodal provides a multimodal result for a deferred client tool.
callID must match one of the [PendingClientTool.CallID] values returned in the Response. parts should contain one or more [types.ContentPart] values (text, images, audio, etc.) that will be sent directly to the LLM.
After all pending tools have been resolved (via SendToolResult, SendToolResultMultimodal, or RejectClientTool), call Conversation.Resume to continue the pipeline.
func (*Conversation) SendVideoChunk
Section titled “func (*Conversation) SendVideoChunk”func (c *Conversation) SendVideoChunk(ctx context.Context, chunk *session.VideoChunk) errorSendVideoChunk sends a video chunk in duplex mode for encoded video streaming. Only available when the conversation was opened with OpenDuplex().
Example:
```go
chunk := &session.VideoChunk{
	Data:       h264Data,
	MIMEType:   "video/h264",
	IsKeyFrame: true,
	Timestamp:  time.Now(),
}
conv.SendVideoChunk(ctx, chunk)
```

func (*Conversation) SessionError
Section titled “func (*Conversation) SessionError”func (c *Conversation) SessionError() errorSessionError returns any error from the duplex session. Only available when the conversation was opened with OpenDuplex(). Note: This is named SessionError to avoid conflict with the Error interface method.
func (*Conversation) SetVar
Section titled “func (*Conversation) SetVar”func (c *Conversation) SetVar(name, value string)SetVar sets a single template variable.
Variables are substituted into the system prompt template:

```go
conv.SetVar("customer_name", "Alice")
// Template: "You are helping {{customer_name}}"
// Becomes:  "You are helping Alice"
```

func (*Conversation) SetVars
Section titled “func (*Conversation) SetVars”func (c *Conversation) SetVars(vars map[string]any)SetVars sets multiple template variables at once.
```go
conv.SetVars(map[string]any{
	"customer_name": "Alice",
	"customer_tier": "premium",
	"max_discount":  20,
})
```

func (*Conversation) SetVarsFromEnv
Section titled “func (*Conversation) SetVarsFromEnv”func (c *Conversation) SetVarsFromEnv(prefix string)SetVarsFromEnv sets variables from environment variables with a given prefix.
Environment variables matching the prefix are added as template variables with the prefix stripped and converted to lowercase:

```go
// If PROMPTKIT_CUSTOMER_NAME=Alice is set:
conv.SetVarsFromEnv("PROMPTKIT_")
// Sets variable "customer_name" = "Alice"
```

func (*Conversation) Stream
Section titled “func (*Conversation) Stream”func (c *Conversation) Stream(ctx context.Context, message any, opts ...SendOption) <-chan StreamChunkStream sends a message and returns a channel of response chunks.
Use this for real-time streaming of LLM responses:
```go
for chunk := range conv.Stream(ctx, "Tell me a story") {
	if chunk.Error != nil {
		log.Printf("Error: %v", chunk.Error)
		break
	}
	fmt.Print(chunk.Text)
}
```

The channel is closed when the response is complete or an error occurs. The final chunk (Type == ChunkDone) contains the complete Response.
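For tests it is handy to emulate this channel contract (text chunks, a final done chunk, then close) without a provider. A stdlib-only sketch with a local chunk type, not the SDK's StreamChunk:

```go
package main

import "fmt"

// chunk is a local stand-in for a streaming chunk.
type chunk struct {
	Text string
	Done bool
	Err  error
}

// fakeStream emulates the contract: text chunks, then a final done chunk,
// then the channel is closed.
func fakeStream(parts ...string) <-chan chunk {
	ch := make(chan chunk)
	go func() {
		defer close(ch)
		for _, p := range parts {
			ch <- chunk{Text: p}
		}
		ch <- chunk{Done: true}
	}()
	return ch
}

func main() {
	var full string
	for c := range fakeStream("Once ", "upon ", "a time") {
		if c.Err != nil {
			break
		}
		full += c.Text
	}
	fmt.Println(full)
}
```

Consumers written against this shape (range until close, check Err, accumulate Text) carry over directly to the real Stream channel.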
func (*Conversation) StreamRaw
Section titled “func (*Conversation) StreamRaw”func (c *Conversation) StreamRaw(ctx context.Context, message any) (<-chan streamPkg.Chunk, error)StreamRaw returns a channel of streaming chunks for use with the stream package. This is a lower-level API that returns stream.Chunk types.
Most users should use Conversation.Stream instead. StreamRaw is useful when working with [stream.Process] or [stream.CollectText].
```go
err := stream.Process(ctx, conv, "Hello", func(chunk stream.Chunk) error {
	fmt.Print(chunk.Text)
	return nil
})
```

func (*Conversation) StreamWithCallback
Section titled “func (*Conversation) StreamWithCallback”func (c *Conversation) StreamWithCallback(ctx context.Context, message any, opts ...SendOption) (*Response, error)StreamWithCallback sends a message and invokes the registered StreamEventHandler for each chunk. This is a convenience wrapper around Conversation.Stream that translates chunks into typed events.
If no handler has been registered via Conversation.OnStreamEvent, this behaves like Stream() but discards all chunks and returns the final response.
Returns the complete Response or an error.
func (*Conversation) ToolRegistry
Section titled “func (*Conversation) ToolRegistry”func (c *Conversation) ToolRegistry() *tools.RegistryToolRegistry returns the underlying tool registry.
This is a power-user method for direct registry access. Tool descriptors are loaded from the pack; this allows inspecting them or registering custom executors.

```go
registry := conv.ToolRegistry()
for _, desc := range registry.Descriptors() {
	fmt.Printf("Tool: %s\n", desc.Name)
}
```

func (*Conversation) TriggerStart
Section titled “func (*Conversation) TriggerStart”func (c *Conversation) TriggerStart(ctx context.Context, message string) errorTriggerStart sends a text message to make the model initiate the conversation. Use this in ASM mode when you want the model to speak first (e.g., introducing itself). Only available when the conversation was opened with OpenDuplex().
Example:
```go
conv, _ := sdk.OpenDuplex("./assistant.pack.json", "interviewer", ...)

// Start processing responses first
go processResponses(conv.Response())

// Trigger the model to begin
conv.TriggerStart(ctx, "Please introduce yourself and begin the interview.")
```

type CredentialOption
Section titled “type CredentialOption”CredentialOption configures credentials for a provider.
type CredentialOption interface { // contains filtered or unexported methods}func WithCredentialAPIKey
Section titled “func WithCredentialAPIKey”func WithCredentialAPIKey(key string) CredentialOptionWithCredentialAPIKey sets an explicit API key.
func WithCredentialEnv
Section titled “func WithCredentialEnv”func WithCredentialEnv(envVar string) CredentialOptionWithCredentialEnv sets an environment variable name for the credential.
func WithCredentialFile
Section titled “func WithCredentialFile”func WithCredentialFile(path string) CredentialOptionWithCredentialFile sets a credential file path.
type EndpointResolver
Section titled “type EndpointResolver”EndpointResolver determines the A2A endpoint URL for a given agent member. Implementations can provide static URLs, service-discovery lookups, or test-friendly mock endpoints.
type EndpointResolver interface { // Resolve returns the base URL (e.g. "http://localhost:9000") for the // named agent member. An empty string means the agent has no reachable // endpoint and should be skipped. Resolve(agentName string) string}type EvaluateOpts
Section titled “type EvaluateOpts”EvaluateOpts configures standalone eval execution.
```go
type EvaluateOpts struct {
	// PackPath loads a PromptPack from the filesystem.
	PackPath string

	// PackData parses a PromptPack from raw JSON bytes (e.g. from an API or config store).
	PackData []byte

	// EvalDefs provides pre-resolved eval definitions directly, bypassing pack loading.
	EvalDefs []evals.EvalDef

	// PromptName selects which prompt's evals to merge with pack-level evals.
	// Only used with PackPath or PackData. If empty, only pack-level evals run.
	PromptName string

	// Messages is the conversation history to evaluate.
	Messages []types.Message

	// SessionID identifies the session for sampling determinism and result attribution.
	SessionID string

	// TurnIndex is the current turn index (0-based) for per-turn trigger filtering.
	TurnIndex int

	// EvalGroups selects which eval groups to execute.
	// Each EvalDef can belong to one or more groups; evals with no explicit
	// groups belong to the "default" group. When set, only evals with at least
	// one matching group run. If nil, all evals run regardless of group.
	EvalGroups []string

	// Trigger selects which eval trigger class to execute.
	// If empty, defaults to TriggerEveryTurn.
	Trigger evals.EvalTrigger

	// JudgeProvider provides a pre-built judge for LLM judge evals.
	// Takes precedence over JudgeTargets.
	JudgeProvider any

	// JudgeTargets provides provider specs for LLM judge evals (Arena-style path).
	// The map keys are judge names; the SDK creates SpecJudgeProvider instances.
	JudgeTargets map[string]any

	// TracerProvider enables OpenTelemetry trace emission for eval results.
	// When set, an OTelEventListener is automatically wired to the EventBus,
	// producing spans named "promptkit.eval.{evalID}" with GenAI SIG attributes.
	// An EventBus is created automatically if not provided.
	TracerProvider trace.TracerProvider

	// EventBus enables eval event emission (eval.completed / eval.failed).
	// If nil and TracerProvider is set, a bus is created automatically.
	EventBus events.Bus

	// Logger is used for structured logging. If nil, the default logger is used.
	Logger *slog.Logger

	// RuntimeConfigPath loads exec eval handlers from a RuntimeConfig YAML file.
	// Exec eval bindings in the config are registered in the eval type registry,
	// enabling external subprocess evals (Python, Node.js, etc.) to run seamlessly.
	// If Registry is also provided, the exec handlers are registered into it.
	RuntimeConfigPath string

	// MetricsCollector enables Prometheus eval metrics using the unified
	// Collector, mirroring the WithMetrics() pattern from the conversation API.
	// When set, the SDK calls Bind(MetricsInstanceLabels) internally and uses
	// the resulting MetricContext as the recorder.
	// Takes precedence over MetricRecorder.
	MetricsCollector *metrics.Collector

	// MetricsInstanceLabels provides per-invocation label values for the
	// MetricsCollector. Keys must match the InstanceLabels declared on the
	// Collector. If the Collector has no InstanceLabels, pass nil.
	MetricsInstanceLabels map[string]string

	// MetricRecorder records eval results as metrics (e.g. Prometheus gauges,
	// counters, histograms) based on Metric definitions in each EvalDef.
	// If nil, no metrics are recorded.
	// Prefer MetricsCollector for new code; MetricRecorder is useful when
	// you already have a custom recorder implementation.
	MetricRecorder evals.MetricRecorder

	// Registry overrides the default eval type registry.
	// If nil, a registry with all built-in handlers is created.
	Registry *evals.EvalTypeRegistry

	// Timeout overrides the per-eval execution timeout.
	// If zero, the default (30s) is used.
	Timeout time.Duration

	// SkipSchemaValidation disables JSON schema validation when loading from PackPath.
	SkipSchemaValidation bool
}
```

type InMemoryA2ATaskStore
Section titled “type InMemoryA2ATaskStore”InMemoryA2ATaskStore is a concurrency-safe in-memory TaskStore.
type InMemoryA2ATaskStore = a2aserver.InMemoryTaskStoretype LocalAgentExecutor
Section titled “type LocalAgentExecutor”LocalAgentExecutor routes A2A tool calls to in-process Conversations instead of making remote HTTP calls. It implements tools.Executor.
type LocalAgentExecutor struct { // contains filtered or unexported fields}func NewLocalAgentExecutor
Section titled “func NewLocalAgentExecutor”func NewLocalAgentExecutor(members map[string]*Conversation) *LocalAgentExecutorNewLocalAgentExecutor creates an executor that routes tool calls to local conversations.
func (*LocalAgentExecutor) Execute
Section titled “func (*LocalAgentExecutor) Execute”func (e *LocalAgentExecutor) Execute(ctx context.Context, descriptor *tools.ToolDescriptor, args json.RawMessage) (json.RawMessage, error)Execute routes a tool call to the corresponding member conversation. It parses {“query”:”…”} from args, calls member.Send(), and returns {“response”:”…”}.
func (*LocalAgentExecutor) Name
Section titled “func (*LocalAgentExecutor) Name”func (e *LocalAgentExecutor) Name() stringName returns the executor name. Must be “a2a” to intercept A2A tool dispatches.
type MCPServerBuilder
Section titled “type MCPServerBuilder”MCPServerBuilder provides a fluent interface for configuring MCP servers.
type MCPServerBuilder struct { // contains filtered or unexported fields}func NewMCPServer
Section titled “func NewMCPServer”func NewMCPServer(name, command string, args ...string) *MCPServerBuilderNewMCPServer creates a new MCP server configuration builder.
```go
server := sdk.NewMCPServer("github", "npx", "@modelcontextprotocol/server-github").
	WithEnv("GITHUB_TOKEN", os.Getenv("GITHUB_TOKEN"))

conv, _ := sdk.Open("./assistant.pack.json", "assistant",
	sdk.WithMCPServer(server),
)
```

func (*MCPServerBuilder) Build
Section titled “func (*MCPServerBuilder) Build”func (b *MCPServerBuilder) Build() mcp.ServerConfigBuild returns the configured server config.
func (*MCPServerBuilder) WithArgs

    func (b *MCPServerBuilder) WithArgs(args ...string) *MCPServerBuilder

WithArgs appends additional arguments to the MCP server command.

func (*MCPServerBuilder) WithEnv

    func (b *MCPServerBuilder) WithEnv(key, value string) *MCPServerBuilder

WithEnv adds an environment variable to the MCP server.

func (*MCPServerBuilder) WithTimeout

    func (b *MCPServerBuilder) WithTimeout(ms int) *MCPServerBuilder

WithTimeout sets the per-request timeout in milliseconds for the MCP server.

func (*MCPServerBuilder) WithToolFilter

    func (b *MCPServerBuilder) WithToolFilter(filter *mcp.ToolFilter) *MCPServerBuilder

WithToolFilter sets a tool filter that controls which tools from this server are exposed to the LLM. Only tools passing the filter are registered.

func (*MCPServerBuilder) WithWorkingDir

    func (b *MCPServerBuilder) WithWorkingDir(dir string) *MCPServerBuilder

WithWorkingDir sets the working directory for the MCP server process.
type MapEndpointResolver

MapEndpointResolver maps each agent name to a specific endpoint URL. Unknown agents return an empty string and are silently skipped.

    type MapEndpointResolver struct {
        Endpoints map[string]string
    }

func (*MapEndpointResolver) Resolve

    func (r *MapEndpointResolver) Resolve(agentName string) string

Resolve returns the endpoint URL for the given agent name, or an empty string if not found.
type MemoryCapability

MemoryCapability registers memory tools and wires the memory executor.

    type MemoryCapability struct {
        // contains filtered or unexported fields
    }

func NewMemoryCapability

    func NewMemoryCapability(store memory.Store, scope map[string]string) *MemoryCapability

NewMemoryCapability creates a MemoryCapability with the given store and scope.

func (*MemoryCapability) Close

    func (c *MemoryCapability) Close() error

Close implements Capability.

func (*MemoryCapability) Init

    func (c *MemoryCapability) Init(_ CapabilityContext) error

Init implements Capability.

func (*MemoryCapability) Name

    func (c *MemoryCapability) Name() string

Name implements Capability.

func (*MemoryCapability) RegisterTools

    func (c *MemoryCapability) RegisterTools(registry *tools.Registry)

RegisterTools implements Capability. It registers the memory executor and tool descriptors, plus any custom tools from ToolProvider stores.

func (*MemoryCapability) WithExtractor

    func (c *MemoryCapability) WithExtractor(e memory.Extractor) *MemoryCapability

WithExtractor sets the memory extractor for automatic extraction.

func (*MemoryCapability) WithRetriever

    func (c *MemoryCapability) WithRetriever(r memory.Retriever) *MemoryCapability

WithRetriever sets the memory retriever for automatic RAG injection.
type MemoryOption

MemoryOption configures the memory capability.

    type MemoryOption func(*MemoryCapability)

func WithMemoryExtractor

    func WithMemoryExtractor(e memory.Extractor) MemoryOption

WithMemoryExtractor sets an extractor for automatic memory extraction.

func WithMemoryRetriever

    func WithMemoryRetriever(r memory.Retriever) MemoryOption

WithMemoryRetriever sets a retriever for automatic RAG injection.
type MultiAgentSession

MultiAgentSession manages a set of agent member conversations orchestrated through an entry conversation. Tool calls from the entry agent to member agents are routed in-process via LocalAgentExecutor.

    type MultiAgentSession struct {
        // contains filtered or unexported fields
    }

func OpenMultiAgent

    func OpenMultiAgent(packPath string, opts ...Option) (*MultiAgentSession, error)

OpenMultiAgent loads a multi-agent pack and creates conversations for all members and the entry agent. Agent-to-agent tool calls from the entry conversation are routed in-process via LocalAgentExecutor.

The pack must have an agents section with entry and members defined. Options are applied to all conversations (entry and members).

func (*MultiAgentSession) Close

    func (s *MultiAgentSession) Close() error

Close closes all conversations (entry and members). Errors from individual Close calls are collected and returned via errors.Join.
func (*MultiAgentSession) Entry

    func (s *MultiAgentSession) Entry() *Conversation

Entry returns the entry conversation.

func (*MultiAgentSession) Members

    func (s *MultiAgentSession) Members() map[string]*Conversation

Members returns the member conversations (excluding the entry).

func (*MultiAgentSession) Send

    func (s *MultiAgentSession) Send(ctx context.Context, message any, opts ...SendOption) (*Response, error)

Send sends a message through the entry agent.

type Option

Option configures a Conversation.

    type Option func(*config) error

func WithA2AAgent
    func WithA2AAgent(builder *A2AAgentBuilder) Option

WithA2AAgent registers an A2A agent using the builder pattern. The agent's skills are discovered at pipeline build time and registered as tools.

    agent := sdk.NewA2AAgent("https://agent.example.com").
        WithAuth("Bearer", os.Getenv("AGENT_TOKEN"))

    conv, _ := sdk.Open("./assistant.pack.json", "assistant",
        sdk.WithA2AAgent(agent),
    )

func WithA2ATools
    func WithA2ATools(bridge *a2a.ToolBridge) Option

WithA2ATools registers tools from an A2A [a2a.ToolBridge] so the LLM can call remote A2A agents as tools.

The bridge must have already discovered agents via [a2a.ToolBridge.RegisterAgent]. Each agent skill becomes a tool with Mode "a2a" in the tool registry.

Example:

    client := a2a.NewClient("https://agent.example.com")
    bridge := a2a.NewToolBridge(client)
    bridge.RegisterAgent(ctx)

    conv, _ := sdk.Open("./assistant.pack.json", "assistant",
        sdk.WithA2ATools(bridge),
    )

func WithAPIKey
    func WithAPIKey(key string) Option

WithAPIKey provides an explicit API key instead of reading one from the environment.

    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithAPIKey(os.Getenv("MY_CUSTOM_KEY")),
    )

func WithAgentEndpoints
    func WithAgentEndpoints(resolver EndpointResolver) Option

WithAgentEndpoints configures endpoint resolution for multi-agent tool routing.

When a pack has an agents section, prompts can reference other agent members as tools. This option tells the SDK how to resolve agent names to A2A endpoint URLs so that tool calls are routed to the correct agent.

Example with a single gateway:

    conv, _ := sdk.Open("./multiagent.pack.json", "orchestrator",
        sdk.WithAgentEndpoints(&sdk.StaticEndpointResolver{
            BaseURL: "http://localhost:9000",
        }),
    )

Example with per-agent endpoints:

    conv, _ := sdk.Open("./multiagent.pack.json", "orchestrator",
        sdk.WithAgentEndpoints(&sdk.MapEndpointResolver{
            Endpoints: map[string]string{
                "summarizer": "http://summarizer:9001",
                "translator": "http://translator:9002",
            },
        }),
    )

func WithAutoResize
    func WithAutoResize(maxWidth, maxHeight int) Option

WithAutoResize is a convenience option that enables image resizing with the specified dimensions. Use this for simple cases; use WithImagePreprocessing for full control.

Example:

    conv, _ := sdk.Open("./chat.pack.json", "vision-assistant",
        sdk.WithAutoResize(1024, 1024), // max 1024x1024
    )

func WithAutoSummarize
    func WithAutoSummarize(provider providers.Provider, threshold, batchSize int) Option

WithAutoSummarize enables automatic summarization of old conversation turns.

When the message count exceeds the threshold, the oldest unsummarized batch of messages is compressed into a summary using the provided LLM provider. Summaries are prepended to the context as system messages.

A separate, cheaper provider can be used for summarization (e.g., a smaller model).
    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithStateStore(store),
        sdk.WithContextWindow(20),
        sdk.WithAutoSummarize(summaryProvider, 100, 50), // summarize after 100 msgs, 50 at a time
    )

func WithAzure
    func WithAzure(endpoint, providerType, model string, opts ...PlatformOption) Option

WithAzure configures Azure AI services as the hosting platform. The providerType specifies the provider factory (e.g., "openai") and model is the model identifier. This uses the Azure SDK default credential chain (Managed Identity, Azure CLI, etc.).

    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithAzure("https://my-resource.openai.azure.com", "openai", "gpt-4o"),
    )

func WithBedrock

    func WithBedrock(region, providerType, model string, opts ...PlatformOption) Option

WithBedrock configures AWS Bedrock as the hosting platform. The providerType specifies the provider factory (e.g., "claude", "openai") and model is the model identifier. This uses the AWS SDK default credential chain (IRSA, instance profile, env vars).

    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithBedrock("us-west-2", "claude", "claude-sonnet-4-20250514"),
    )

func WithCapability
    func WithCapability(capability Capability) Option

WithCapability adds an explicit platform capability.

Capabilities provide namespaced tools that are automatically injected into conversations. Most capabilities are auto-inferred from pack structure (e.g., the workflow capability from pack.Workflow). Use this for explicit configuration or custom capabilities.

    conv, _ := sdk.Open("./assistant.pack.json", "chat",
        sdk.WithCapability(sdk.NewWorkflowCapability()),
    )

func WithCompaction
    func WithCompaction(enabled bool) Option

WithCompaction controls context compaction in tool loops. When enabled (the default), stale tool results are folded between rounds to prevent context overflow. The budget is auto-detected from the provider's context window or defaults to 128K tokens. Pass false to disable.

func WithCompactionRules

    func WithCompactionRules(rules ...stage.CompactionRule) Option

WithCompactionRules configures the default ContextCompactor with custom rules. Rules are applied in order; the first match wins. This replaces the default rules (FoldToolResults). It is mutually exclusive with WithCompactionStrategy; the last one set wins.
func WithCompactionStrategy

    func WithCompactionStrategy(strategy stage.CompactionStrategy) Option

WithCompactionStrategy replaces the default context compactor with a custom CompactionStrategy implementation. This is mutually exclusive with WithCompactionRules; the last one set wins.

func WithContextCarryForward

    func WithContextCarryForward() Option

WithContextCarryForward enables context carry-forward for workflow transitions.

When enabled, transitioning to a new state injects a summary of the previous state's conversation into the new conversation via the {{workflow_context}} template variable. This provides continuity across workflow states.

Default: disabled (each state gets a fresh conversation).

    wc, _ := sdk.OpenWorkflow("./support.pack.json",
        sdk.WithContextCarryForward(),
    )

func WithContextRetrieval
    func WithContextRetrieval(embeddingProvider providers.EmbeddingProvider, topK int) Option

WithContextRetrieval enables semantic search for relevant older messages.

When configured alongside WithContextWindow, the pipeline uses the embedding provider to find messages outside the hot window that are semantically similar to the current user message. These retrieved messages are inserted chronologically between the summaries and the hot window.

Requires WithContextWindow to be set.

    embProvider, _ := openai.NewEmbeddingProvider()
    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithStateStore(store),
        sdk.WithContextWindow(20),
        sdk.WithContextRetrieval(embProvider, 5), // retrieve top 5 relevant messages
    )

func WithContextWindow
    func WithContextWindow(recentMessages int) Option

WithContextWindow sets the hot-window size for RAG context assembly.

When set to a positive value, the pipeline uses ContextAssemblyStage and IncrementalSaveStage instead of loading all history on every turn. This dramatically reduces I/O for long conversations by loading only the most recent N messages.

Requires a state store (WithStateStore). The store's MessageReader and MessageAppender interfaces are used when available, with automatic fallback to full Load/Save when they are not.

    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithStateStore(store),
        sdk.WithContextWindow(20), // keep last 20 messages in hot window
    )

func WithConversationID
    func WithConversationID(id string) Option

WithConversationID sets the conversation identifier.

If not set, a unique ID is auto-generated. Set this when you want a specific ID for state persistence or tracking.

    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithStateStore(store),
        sdk.WithConversationID("user-123-session-456"),
    )

func WithCredential
    func WithCredential(opts ...CredentialOption) Option

WithCredential configures advanced credential resolution for the provider.

This is the advanced form of credential configuration. For simple API key usage, prefer WithAPIKey. When both WithCredential and WithAPIKey are set, WithCredential takes precedence.

Credential resolution priority: direct API key > environment variable > file.
Example with an environment variable:

    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithCredential(sdk.WithCredentialEnv("MY_SERVICE_API_KEY")),
    )

Example with a file:

    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithCredential(sdk.WithCredentialFile("/run/secrets/api-key")),
    )

Example with a direct key (equivalent to WithAPIKey):

    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithCredential(sdk.WithCredentialAPIKey("sk-...")),
    )

func WithEvalGroups
    func WithEvalGroups(groups ...string) Option

WithEvalGroups selects which eval groups to execute during the conversation.

Each EvalDef can belong to one or more groups via its Groups field. Evals with no explicit groups belong to the "default" group. When groups are specified, only evals with at least one matching group run. If not set (nil), all evals run regardless of group.
func WithEvalRegistry

    func WithEvalRegistry(r *evals.EvalTypeRegistry) Option

WithEvalRegistry provides a custom eval type registry.

Use this to register custom eval type handlers beyond the built-in ones. If not set, the default registry with all built-in handlers is used.

func WithEvalRunner

    func WithEvalRunner(r *evals.EvalRunner) Option

WithEvalRunner configures the eval runner for executing evals in-process.

Eval results are emitted as events on the EventBus (eval.completed / eval.failed). If no runner is provided and eval definitions exist in the pack, a default runner is created automatically using the configured eval registry.

Example:

    registry := evals.NewEvalTypeRegistry()
    runner := evals.NewEvalRunner(registry)

    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithEvalRunner(runner),
    )

func WithEvalsDisabled

    func WithEvalsDisabled() Option

WithEvalsDisabled disables eval execution even when eval definitions exist in the pack. Use this to temporarily suppress evals without removing their definitions.
func WithEventBus

    func WithEventBus(bus events.Bus) Option

WithEventBus provides a shared event bus for observability.

When set, the conversation emits events to this bus. Use this to share an event bus across multiple conversations for centralized logging, metrics, or debugging.

    bus := events.NewEventBus()
    bus.SubscribeAll(myMetricsCollector)

    conv1, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithEventBus(bus))
    conv2, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithEventBus(bus))

func WithEventStore

    func WithEventStore(store events.EventStore) Option

WithEventStore configures event persistence for session recording.

When set, all events published through the conversation's event bus are automatically persisted to the store. This enables session replay and analysis.

The event store is automatically attached to the event bus. If no event bus is provided via WithEventBus, a new one is created internally.

Example with file-based storage:

    store, _ := events.NewFileEventStore("/var/log/sessions")
    defer store.Close()

    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithEventStore(store),
    )

Example with a shared bus and store:

    store, _ := events.NewFileEventStore("/var/log/sessions")
    bus := events.NewEventBus()
    bus.SubscribeAll(store.OnEvent)

    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithEventBus(bus),
    )

func WithExecutionTimeout
    func WithExecutionTimeout(d time.Duration) Option

WithExecutionTimeout overrides the default pipeline execution timeout (30s). Use this for pipelines that need more time, such as multi-round tool calling with slower providers like Ollama. Pass 0 to disable the timeout entirely.

    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithExecutionTimeout(120 * time.Second),
    )

func WithImagePreprocessing
    func WithImagePreprocessing(cfg *stage.ImagePreprocessConfig) Option

WithImagePreprocessing enables automatic image preprocessing before sending images to the LLM. This resizes large images to fit within provider limits, reducing token usage and preventing errors.

The default configuration resizes images to a maximum of 1024x1024 at 85% quality.

Example with defaults:

    conv, _ := sdk.Open("./chat.pack.json", "vision-assistant",
        sdk.WithImagePreprocessing(nil), // use default settings
    )

Example with a custom config:

    conv, _ := sdk.Open("./chat.pack.json", "vision-assistant",
        sdk.WithImagePreprocessing(&stage.ImagePreprocessConfig{
            Resize: stage.ImageResizeStageConfig{
                MaxWidth:  2048,
                MaxHeight: 2048,
                Quality:   90,
            },
            EnableResize: true,
        }),
    )

func WithJSONMode
    func WithJSONMode() Option

WithJSONMode is a convenience option that enables simple JSON output mode. The model will return valid JSON objects, but without schema enforcement. Use WithResponseFormat for more control, including schema validation.

Example:

    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithJSONMode(),
    )
    resp, _ := conv.Send(ctx, "List 3 colors as JSON")
    // Response: {"colors": ["red", "green", "blue"]}

func WithJudgeProvider

    func WithJudgeProvider(jp handlers.JudgeProvider) Option

WithJudgeProvider configures the LLM judge provider for judge-based evals.

If not set, an SDKJudgeProvider is created automatically using the conversation's provider.

func WithLogger
    func WithLogger(l *slog.Logger) Option

WithLogger sets a custom *slog.Logger for the SDK. This replaces the process-wide default logger, so all PromptKit components (runtime, pipeline, providers, evals) will use it.

Since all major Go logging libraries ship slog adapters (e.g., zapslog, slogzerolog), this gives full control over the logging backend without requiring a custom interface.

    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithLogger(slog.New(slog.NewJSONHandler(os.Stdout, nil))),
    )

Note: only the first logger set via WithLogger takes effect process-wide. Subsequent calls are silently ignored due to the sync.Once in setLoggerOnce.
func WithMCP

    func WithMCP(name, command string, args ...string) Option

WithMCP adds an MCP (Model Context Protocol) server for tool execution.

MCP servers provide external tools that can be called by the LLM. The server is started automatically when the conversation opens and stopped when the conversation is closed.

Basic usage:

    conv, _ := sdk.Open("./assistant.pack.json", "assistant",
        sdk.WithMCP("filesystem", "npx", "@modelcontextprotocol/server-filesystem", "/path"),
    )

With environment variables:

    conv, _ := sdk.Open("./assistant.pack.json", "assistant",
        sdk.WithMCP("github", "npx", "@modelcontextprotocol/server-github").
            WithEnv("GITHUB_TOKEN", os.Getenv("GITHUB_TOKEN")),
    )

Multiple servers:

    conv, _ := sdk.Open("./assistant.pack.json", "assistant",
        sdk.WithMCP("filesystem", "npx", "@modelcontextprotocol/server-filesystem", "/path"),
        sdk.WithMCP("memory", "npx", "@modelcontextprotocol/server-memory"),
    )

Security: Trust Boundary

The command and args are executed as a subprocess (via the stdio-based MCP transport). Commands are not sandboxed or validated. The caller is responsible for ensuring that command values come from trusted sources. Untrusted input should never be passed as the command or args parameters.
func WithMCPServer

    func WithMCPServer(builder *MCPServerBuilder) Option

WithMCPServer adds a pre-configured MCP server.

    server := sdk.NewMCPServer("github", "npx", "@modelcontextprotocol/server-github").
        WithEnv("GITHUB_TOKEN", os.Getenv("GITHUB_TOKEN"))

    conv, _ := sdk.Open("./assistant.pack.json", "assistant",
        sdk.WithMCPServer(server),
    )

func WithMaxActiveSkillsOption
    func WithMaxActiveSkillsOption(n int) Option

WithMaxActiveSkillsOption sets the maximum number of concurrently active skills. The default is 5 if not set.

    conv, _ := sdk.Open("./assistant.pack.json", "chat",
        sdk.WithMaxActiveSkillsOption(10),
    )

func WithMaxConcurrentEvals

    func WithMaxConcurrentEvals(n int) Option

WithMaxConcurrentEvals sets the maximum number of concurrent eval goroutines.

Each Send() call dispatches turn evals asynchronously. This option limits how many eval goroutines can run concurrently. If the limit is reached, additional eval dispatches are skipped (non-blocking) to prevent unbounded goroutine growth. The default is DefaultMaxConcurrentEvals (10).
func WithMaxMessageSize
Section titled “func WithMaxMessageSize”func WithMaxMessageSize(bytes int) OptionWithMaxMessageSize sets the maximum allowed user message size in bytes.
When a message exceeds this limit, Send() and Stream() return ErrMessageTooLarge. The default is 10 MB.
conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithMaxMessageSize(1024 * 1024), // 1 MB limit)func WithMemory
    func WithMemory(store memory.Store, scope map[string]string, opts ...MemoryOption) Option

WithMemory enables agentic memory — cross-session knowledge that persists beyond a single conversation. The store provides persistence; the scope keys determine memory isolation (e.g., {"user_id": "x", "workspace_id": "y"}).

Optionally, pass an extractor for automatic memory extraction from conversations and/or a retriever for automatic RAG injection of relevant memories.

func WithMessageLog

    func WithMessageLog(log statestore.MessageLog) Option

WithMessageLog enables per-round write-through persistence during tool loops. When configured, messages are appended to the log after each tool-loop round completes, so they survive process crashes without waiting for the pipeline's save stage. The save stage skips the message append when a MessageLog is active.

The store must implement [statestore.MessageLog]. MemoryStore implements it by default. Pass nil to disable.
func WithMetricRecorder

    func WithMetricRecorder(r evals.MetricRecorder) Option

WithMetricRecorder configures a MetricRecorder for the eval middleware.

When set, eval results are automatically recorded as Prometheus metrics based on the Metric definition in each EvalDef. This is the conversation equivalent of EvaluateOpts.MetricRecorder — it wires metric recording into the live eval middleware that runs on every Send() and Close().

Deprecated: Use WithMetrics with metrics.NewCollector instead.

func WithMetrics

    func WithMetrics(collector *metrics.Collector, instanceLabels map[string]string) Option

WithMetrics attaches a unified metrics Collector to this conversation. The Collector records both pipeline operational metrics (provider calls, tokens, cost, tool calls, pipeline duration, validations) and eval result metrics into a Prometheus registry.

instanceLabels provides values for the InstanceLabels declared on the Collector. If the Collector has no InstanceLabels, pass nil.

The Collector is created once per process; each conversation binds its own instance label values via this option:

    collector := metrics.NewCollector(metrics.CollectorOpts{
        Registerer:     reg,
        Namespace:      "myapp",
        InstanceLabels: []string{"tenant"},
    })

    conv, _ := sdk.Open(pack, prompt,
        sdk.WithMetrics(collector, map[string]string{"tenant": "acme"}),
    )

func WithModel
    func WithModel(model string) Option

WithModel overrides the default model specified in the pack.

    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithModel("gpt-4o"),
    )

func WithProvider

    func WithProvider(p providers.Provider) Option

WithProvider uses a custom provider instance.

This bypasses auto-detection and uses the given provider directly. Use this for custom provider implementations or when you need full control over provider configuration.

    provider := openai.NewProvider(openai.Config{...})
    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithProvider(provider),
    )

func WithProviderHook
    func WithProviderHook(h hooks.ProviderHook) Option

WithProviderHook registers a provider hook for intercepting LLM calls.

Provider hooks run synchronously before and after each LLM call. Multiple hooks are executed in order; the first deny short-circuits.

    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithProviderHook(hooks.NewBannedWords([]string{"secret"})),
    )

func WithRecording
    func WithRecording(cfg *RecordingConfig) Option

WithRecording enables session recording by inserting RecordingStages into the pipeline. These stages capture full binary content and publish events directly to the EventBus, bypassing the emitter's binary stripping.

If cfg is nil, default settings are used (audio=true, video=false, images=true). An EventBus is created automatically if none was provided via WithEventBus.

    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithRecording(nil), // use defaults
    )

    // Or with a custom config:
    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithRecording(&sdk.RecordingConfig{
            IncludeAudio:  true,
            IncludeVideo:  true,
            IncludeImages: true,
        }),
    )

func WithRelevanceTruncation
    func WithRelevanceTruncation(cfg *RelevanceConfig) Option

WithRelevanceTruncation configures embedding-based relevance truncation.

This automatically sets the truncation strategy to "relevance" and configures the embedding provider for semantic similarity scoring.
Example with OpenAI embeddings:

    embProvider, _ := openai.NewEmbeddingProvider()
    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithTokenBudget(8000),
        sdk.WithRelevanceTruncation(&sdk.RelevanceConfig{
            EmbeddingProvider:   embProvider,
            MinRecentMessages:   3,
            SimilarityThreshold: 0.3,
        }),
    )

Example with Gemini embeddings:

    embProvider, _ := gemini.NewEmbeddingProvider()
    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithTokenBudget(8000),
        sdk.WithRelevanceTruncation(&sdk.RelevanceConfig{
            EmbeddingProvider: embProvider,
        }),
    )

func WithResponseFormat
    func WithResponseFormat(format *providers.ResponseFormat) Option

WithResponseFormat configures the LLM response format for JSON-mode output. This instructs the model to return responses in the specified format.

For simple JSON object output:

    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithResponseFormat(&providers.ResponseFormat{
            Type: providers.ResponseFormatJSON,
        }),
    )

For structured JSON output with a schema:

    schema := json.RawMessage(`{"type":"object","properties":{"name":{"type":"string"}}}`)
    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithResponseFormat(&providers.ResponseFormat{
            Type:       providers.ResponseFormatJSONSchema,
            JSONSchema: schema,
            SchemaName: "person",
            Strict:     true,
        }),
    )

func WithRuntimeConfig
    func WithRuntimeConfig(path string) Option

WithRuntimeConfig loads a RuntimeConfig YAML file and applies its settings to the SDK conversation. This provides a declarative alternative to programmatic configuration via individual With* options.

RuntimeConfig sections are applied in order: providers, MCP servers, state store, logging, tools. Programmatic options applied after WithRuntimeConfig take precedence.

Example:

    conv, err := sdk.Open("./agent.pack.json", "chat",
        sdk.WithRuntimeConfig("./runtime.yaml"),
    )

func WithSessionHook
    func WithSessionHook(h hooks.SessionHook) Option

WithSessionHook registers a session hook for tracking the conversation lifecycle.

Session hooks are called on session start, after each turn, and on session end.

    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithSessionHook(mySessionLogger),
    )

func WithSessionMetadata

    func WithSessionMetadata(metadata map[string]any) Option

WithSessionMetadata attaches arbitrary key-value metadata to the session.

Metadata is persisted in the state store and available to capabilities via CapabilityContext. Common uses: tenant ID, channel, tier, A/B cohort.

    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithSessionMetadata(map[string]any{
            "tenant":  "acme-corp",
            "channel": "web-chat",
        }),
    )

func WithShutdownManager
    func WithShutdownManager(mgr *ShutdownManager) Option

WithShutdownManager attaches a ShutdownManager to the conversation. When set, Open and OpenDuplex automatically register the conversation with the manager, and Conversation.Close automatically deregisters it.

    mgr := sdk.NewShutdownManager()
    go sdk.GracefulShutdown(mgr, 30*time.Second)

    conv, _ := sdk.Open("./chat.pack.json", "assistant",
        sdk.WithShutdownManager(mgr),
    )
    defer conv.Close()

func WithSkillSelectorOption
    func WithSkillSelectorOption(s skills.SkillSelector) Option

WithSkillSelectorOption sets the skill selector for filtering available skills. The selector determines which skills from the available set are presented to the model in the Phase 1 index.

    conv, _ := sdk.Open("./assistant.pack.json", "chat",
        sdk.WithSkillSelectorOption(skills.NewTagSelector([]string{"coding"})),
    )

func WithSkillsDir

    func WithSkillsDir(dir string) Option

WithSkillsDir adds a directory-based skill source. Skills are discovered by scanning the directory for SKILL.md files. Multiple directories can be added by calling this option multiple times.

    conv, _ := sdk.Open("./assistant.pack.json", "chat",
        sdk.WithSkillsDir("./skills"),
    )

func WithSkipSchemaValidation
Section titled “func WithSkipSchemaValidation”func WithSkipSchemaValidation() OptionWithSkipSchemaValidation disables JSON schema validation during pack loading.
By default, packs are validated against the PromptPack JSON schema to ensure they are well-formed. Use this option to skip validation, for example when loading legacy packs or during development.
conv, _ := sdk.Open("./legacy.pack.json", "assistant", sdk.WithSkipSchemaValidation(),)func WithStateStore
Section titled “func WithStateStore”func WithStateStore(store statestore.Store) OptionWithStateStore configures persistent state storage.
When configured, conversation state (messages, metadata) is automatically persisted after each turn and can be resumed later via Resume.
store := statestore.NewRedisStore("redis://localhost:6379")conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithStateStore(store),)func WithStreamingConfig
Section titled “func WithStreamingConfig”func WithStreamingConfig(streamingConfig *providers.StreamingInputConfig) OptionWithStreamingConfig configures streaming for duplex mode. When set, enables ASM (Audio Streaming Model) mode with continuous bidirectional streaming. When nil (default), uses VAD (Voice Activity Detection) mode with turn-based streaming.
ASM mode is for models with native bidirectional audio support (e.g., gemini-2.0-flash-exp). VAD mode is for standard text-based models with audio transcription.
Example for ASM mode:
conv, _ := sdk.OpenDuplex("./assistant.pack.json", "voice-chat", sdk.WithStreamingConfig(&providers.StreamingInputConfig{ Type: types.ContentTypeAudio, SampleRate: 16000, Channels: 1, }),)func WithStreamingVideo
Section titled “func WithStreamingVideo”func WithStreamingVideo(cfg *VideoStreamConfig) OptionWithStreamingVideo enables realtime video/image streaming for duplex sessions. This is used for webcam feeds, screen sharing, and continuous frame analysis.
The FrameRateLimitStage is added to the pipeline when TargetFPS > 0, dropping frames to maintain the target frame rate for LLM processing.
Example with defaults (1 FPS):
session, _ := sdk.OpenDuplex("./assistant.pack.json", "vision-chat", sdk.WithStreamingVideo(nil), // Use default settings)Example with custom config:
session, _ := sdk.OpenDuplex("./assistant.pack.json", "vision-chat", sdk.WithStreamingVideo(&sdk.VideoStreamConfig{ TargetFPS: 2.0, // 2 frames per second MaxWidth: 1280, // Resize large frames MaxHeight: 720, Quality: 80, }),)Sending frames:
for frame := range webcam.Frames() { session.SendFrame(ctx, &session.ImageFrame{ Data: frame.JPEG(), MIMEType: "image/jpeg", Timestamp: time.Now(), })}func WithTTS
Section titled “func WithTTS”func WithTTS(service tts.Service) OptionWithTTS configures text-to-speech for the Pipeline.
TTS is applied via Pipeline middleware during streaming responses.
conv, _ := sdk.Open("./assistant.pack.json", "voice", sdk.WithTTS(tts.NewOpenAI(os.Getenv("OPENAI_API_KEY"))),)func WithTokenBudget
Section titled “func WithTokenBudget”func WithTokenBudget(tokens int) OptionWithTokenBudget sets the maximum tokens for context (prompt + history).
When the conversation history exceeds this budget, older messages are truncated according to the truncation strategy.
conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithTokenBudget(8000),)func WithToolHook
Section titled “func WithToolHook”func WithToolHook(h hooks.ToolHook) OptionWithToolHook registers a tool hook for intercepting tool execution.
Tool hooks run synchronously before and after each LLM-initiated tool call. Multiple hooks are executed in order; the first deny short-circuits.
conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithToolHook(myToolAuditHook),)func WithToolRegistry
Section titled “func WithToolRegistry”func WithToolRegistry(registry *tools.Registry) OptionWithToolRegistry provides a pre-configured tool registry.
This is a power-user option for scenarios requiring direct registry access. Tool descriptors are still loaded from the pack; this allows providing custom executors or middleware.
registry := tools.NewRegistry()registry.RegisterExecutor(&myCustomExecutor{})conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithToolRegistry(registry),)func WithTracerProvider
Section titled “func WithTracerProvider”func WithTracerProvider(tp trace.TracerProvider) OptionWithTracerProvider sets the OpenTelemetry TracerProvider for distributed tracing.
When set, the conversation emits OTel spans for pipeline, provider, tool, middleware, and workflow events. These spans nest under the provider’s trace tree, enabling end-to-end observability across services.
If not set, no spans are created (zero overhead).
tp := sdktrace.NewTracerProvider(...)conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithTracerProvider(tp),)func WithTruncation
Section titled “func WithTruncation”func WithTruncation(strategy string) OptionWithTruncation sets the truncation strategy for context management.
Strategies:
- "sliding": Remove oldest messages first (default)
- "summarize": Summarize old messages before removing
- "relevance": Remove least relevant messages based on embedding similarity
conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithTokenBudget(8000), sdk.WithTruncation("summarize"),)
func WithTurnDetector
Section titled “func WithTurnDetector”func WithTurnDetector(detector audio.TurnDetector) OptionWithTurnDetector configures turn detection for the Pipeline.
Turn detectors determine when a user has finished speaking in audio sessions.
conv, _ := sdk.Open("./assistant.pack.json", "voice", sdk.WithTurnDetector(audio.NewSilenceDetector(500 * time.Millisecond)),)func WithUserID
Section titled “func WithUserID”func WithUserID(id string) OptionWithUserID sets a pseudonymous user identifier for the session.
This is an operator-provided virtual identity used for cross-session correlation (e.g., memory, analytics). PromptKit does not manage user-to-session mapping — that is the operator’s concern.
conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithUserID("virtual-user-abc"),)func WithVADMode
Section titled “func WithVADMode”func WithVADMode(sttService stt.Service, ttsService tts.Service, cfg *VADModeConfig) OptionWithVADMode configures VAD mode for voice conversations with standard text-based LLMs. VAD mode processes audio through a pipeline: Audio → VAD → STT → LLM → TTS → Audio
This is an alternative to ASM mode (WithStreamingConfig) for providers without native audio streaming support.
Example:
sttService := stt.NewOpenAI(os.Getenv("OPENAI_API_KEY"))ttsService := tts.NewOpenAI(os.Getenv("OPENAI_API_KEY"))
conv, _ := sdk.OpenDuplex("./assistant.pack.json", "voice-chat", sdk.WithProvider(openai.NewProvider(openai.Config{...})), sdk.WithVADMode(sttService, ttsService, nil), // nil uses defaults)With custom config:
conv, _ := sdk.OpenDuplex("./assistant.pack.json", "voice-chat", sdk.WithProvider(openai.NewProvider(openai.Config{...})), sdk.WithVADMode(sttService, ttsService, &sdk.VADModeConfig{ SilenceDuration: 500 * time.Millisecond, Voice: "nova", }),)func WithVariableProvider
Section titled “func WithVariableProvider”func WithVariableProvider(p variables.Provider) OptionWithVariableProvider adds a variable provider for dynamic variable resolution.
Variables are resolved before each Send() and merged with static variables. Later providers in the chain override earlier ones with the same key.
conv, _ := sdk.Open("./assistant.pack.json", "support", sdk.WithVariableProvider(variables.Time()), sdk.WithVariableProvider(variables.State()),)func WithVariables
Section titled “func WithVariables”func WithVariables(vars map[string]string) OptionWithVariables sets initial variables for template substitution.
These variables are available immediately when the conversation opens, before any messages are sent. Use this for variables that must be set before the first LLM call (e.g., in streaming/ASM mode).
Variables set here override prompt defaults but can be further modified via conv.SetVar() for subsequent messages.
conv, _ := sdk.Open("./assistant.pack.json", "assistant", sdk.WithVariables(map[string]string{ "user_name": "Alice", "language": "en", }),)func WithVertex
Section titled “func WithVertex”func WithVertex(region, project, providerType, model string, opts ...PlatformOption) OptionWithVertex configures Google Cloud Vertex AI as the hosting platform. The providerType specifies the provider factory (e.g., “claude”, “gemini”) and model is the model identifier. This uses Application Default Credentials (Workload Identity, gcloud auth, etc.).
conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithVertex("us-central1", "my-project", "gemini", "gemini-2.0-flash"),)type PackError
Section titled “type PackError”PackError represents an error loading or parsing a pack file.
type PackError struct { // Path is the pack file path. Path string
// Cause is the underlying error. Cause error}func (*PackError) Error
Section titled “func (*PackError) Error”func (e *PackError) Error() stringError implements the error interface.
func (*PackError) Unwrap
Section titled “func (*PackError) Unwrap”func (e *PackError) Unwrap() errorUnwrap returns the underlying error.
type PackTemplate
Section titled “type PackTemplate”PackTemplate is a pre-loaded, immutable representation of a pack file.
Use PackTemplate when creating many conversations from the same pack to avoid redundant file I/O, JSON parsing, schema validation, prompt registry construction, and tool repository construction on each Open() call.
PackTemplate is safe for concurrent use. All cached artifacts are immutable after construction.
Usage:
tmpl, err := sdk.LoadTemplate("./assistant.pack.json")if err != nil { log.Fatal(err)}
// Create conversations efficiently — pack is loaded oncefor req := range requests { conv, err := tmpl.Open("chat", sdk.WithProvider(myProvider)) if err != nil { log.Printf("open failed: %v", err) continue } go handleConversation(conv, req)}type PackTemplate struct { // contains filtered or unexported fields}func LoadTemplate
Section titled “func LoadTemplate”func LoadTemplate(packPath string, opts ...Option) (*PackTemplate, error)LoadTemplate loads a pack file and pre-builds shared, immutable resources.
The returned PackTemplate caches:
- The parsed pack structure
- The prompt registry (prompt configs, fragments)
- The tool repository (tool descriptors)
These are shared across all conversations created from this template. Per-conversation resources (tool executors, state stores, sessions) are still created fresh for each conversation.
Options that affect pack loading can be passed:
- WithSkipSchemaValidation() to skip JSON schema validation
func (*PackTemplate) Open
Section titled “func (*PackTemplate) Open”func (t *PackTemplate) Open(promptName string, opts ...Option) (*Conversation, error)Open creates a new conversation from this template for the given prompt.
This is equivalent to sdk.Open but reuses pre-loaded pack resources, avoiding per-conversation file I/O and parsing overhead.
Per-conversation resources are still created fresh:
- Tool registry (with shared repository but per-conversation executors)
- State store and session
- Capabilities
- Event bus and hooks
func (*PackTemplate) OpenDuplex
Section titled “func (*PackTemplate) OpenDuplex”func (t *PackTemplate) OpenDuplex(promptName string, opts ...Option) (*Conversation, error)OpenDuplex creates a new duplex streaming conversation from this template.
This is equivalent to sdk.OpenDuplex but reuses pre-loaded pack resources.
func (*PackTemplate) Pack
Section titled “func (*PackTemplate) Pack”func (t *PackTemplate) Pack() *pack.PackPack returns the loaded pack for inspection. The returned pack must not be modified.
type PendingClientTool
Section titled “type PendingClientTool”PendingClientTool represents a client-mode tool call that was deferred because no OnClientTool handler was registered. The caller must supply a result via Conversation.SendToolResult or reject it via Conversation.RejectClientTool and then call Conversation.Resume.
type PendingClientTool struct { // CallID is the provider-assigned ID for this tool invocation. CallID string
// ToolName is the tool's name as defined in the pack. ToolName string
// Args contains the parsed arguments from the LLM. Args map[string]any
// ConsentMsg is the human-readable consent message from the pack's // client.consent.message field. Empty when no consent is configured. ConsentMsg string
// Categories are the semantic consent categories (e.g., ["location"]). Categories []string}type PendingTool
Section titled “type PendingTool”PendingTool represents a tool call that requires external approval.
type PendingTool struct { // Unique identifier for this pending call ID string
// Tool name Name string
// Arguments passed to the tool Arguments map[string]any
// Reason the tool requires approval Reason string
// Human-readable message about why approval is needed Message string}type PlatformOption
Section titled “type PlatformOption”PlatformOption configures a platform for a provider.
type PlatformOption interface { // contains filtered or unexported methods}func WithAWSProfile
Section titled “func WithAWSProfile”func WithAWSProfile(profile string) PlatformOptionWithAWSProfile configures a named AWS profile for Bedrock credentials. When set, the SDK loads credentials from the named profile in ~/.aws/credentials or ~/.aws/config.
conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithBedrock("us-west-2", "claude", "claude-sonnet-4-20250514", sdk.WithAWSProfile("bedrock-prod"), ),)func WithAWSRoleARN
Section titled “func WithAWSRoleARN”func WithAWSRoleARN(arn string) PlatformOptionWithAWSRoleARN configures AWS STS role assumption for Bedrock credentials. When set, the SDK uses NewAWSCredentialWithRole instead of the default credential chain.
conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithBedrock("us-west-2", "claude", "claude-sonnet-4-20250514", sdk.WithAWSRoleARN("arn:aws:iam::123456789012:role/BedrockAccess"), ),)func WithAzureClientSecret
Section titled “func WithAzureClientSecret”func WithAzureClientSecret(tenantID, clientID, secret string) PlatformOptionWithAzureClientSecret configures Azure service principal authentication using a client secret (tenant ID, client ID, and secret).
conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithAzure("https://my-resource.openai.azure.com", "openai", "gpt-4o", sdk.WithAzureClientSecret("tenant-id", "client-id", "client-secret"), ),)func WithAzureManagedIdentity
Section titled “func WithAzureManagedIdentity”func WithAzureManagedIdentity(clientID string) PlatformOptionWithAzureManagedIdentity configures Azure Managed Identity for authentication. The clientID is optional — pass an empty string to use system-assigned identity, or a client ID for user-assigned identity.
conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithAzure("https://my-resource.openai.azure.com", "openai", "gpt-4o", sdk.WithAzureManagedIdentity("client-id-for-user-assigned"), ),)func WithGCPServiceAccount
Section titled “func WithGCPServiceAccount”func WithGCPServiceAccount(keyPath string) PlatformOptionWithGCPServiceAccount configures a service account key file for Vertex AI credentials. When set, the SDK uses NewGCPCredentialWithServiceAccount instead of Application Default Credentials.
conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithVertex("us-central1", "my-project", "gemini", "gemini-2.0-flash", sdk.WithGCPServiceAccount("/path/to/service-account.json"), ),)func WithPlatformEndpoint
Section titled “func WithPlatformEndpoint”func WithPlatformEndpoint(endpoint string) PlatformOptionWithPlatformEndpoint sets a custom endpoint URL.
func WithPlatformProject
Section titled “func WithPlatformProject”func WithPlatformProject(project string) PlatformOptionWithPlatformProject sets the cloud project (for Vertex).
func WithPlatformRegion
Section titled “func WithPlatformRegion”func WithPlatformRegion(region string) PlatformOptionWithPlatformRegion sets the cloud region.
type ProviderError
Section titled “type ProviderError”ProviderError represents an error from the LLM provider.
type ProviderError struct { // Provider name (e.g., "openai", "anthropic"). Provider string
// StatusCode is the HTTP status code if available. StatusCode int
// Message is the error message from the provider. Message string
// Cause is the underlying error. Cause error}func (*ProviderError) Error
Section titled “func (*ProviderError) Error”func (e *ProviderError) Error() stringError implements the error interface.
func (*ProviderError) Unwrap
Section titled “func (*ProviderError) Unwrap”func (e *ProviderError) Unwrap() errorUnwrap returns the underlying error.
type RecordingConfig
Section titled “type RecordingConfig”RecordingConfig configures session recording via RecordingStage. The stage captures full message content (including binary data) and publishes directly to the EventBus for session replay.
type RecordingConfig struct { // IncludeAudio records audio data (may be large). Default: true. IncludeAudio bool
// IncludeVideo records video data (may be large). Default: false. IncludeVideo bool
// IncludeImages records image data. Default: true. IncludeImages bool}func DefaultRecordingConfig
Section titled “func DefaultRecordingConfig”func DefaultRecordingConfig() RecordingConfigDefaultRecordingConfig returns a RecordingConfig with sensible defaults.
type RelevanceConfig
Section titled “type RelevanceConfig”RelevanceConfig configures embedding-based relevance truncation. Used when truncation strategy is “relevance”.
type RelevanceConfig struct { // EmbeddingProvider generates embeddings for similarity scoring. // Required for relevance-based truncation. EmbeddingProvider providers.EmbeddingProvider
// MinRecentMessages always keeps the N most recent messages regardless of relevance. // Default: 3 MinRecentMessages int
// AlwaysKeepSystemRole keeps all system role messages regardless of score. // Default: true AlwaysKeepSystemRole bool
// SimilarityThreshold is the minimum score (0.0-1.0) to consider a message relevant. // Messages below this threshold are dropped first. Default: 0.0 (no threshold) SimilarityThreshold float64
// QuerySource determines what text to compare messages against. // Values: "last_user" (default), "last_n", "custom" QuerySource string
// LastNCount is the number of messages to use when QuerySource is "last_n". // Default: 3 LastNCount int
// CustomQuery is the query text when QuerySource is "custom". CustomQuery string}type Response
Section titled “type Response”Response represents the result of a conversation turn.
Response wraps the assistant’s message with convenience methods and additional metadata like timing and validation results.
Basic usage:
resp, _ := conv.Send(ctx, "Hello!")fmt.Println(resp.Text()) // Text contentfmt.Println(resp.TokensUsed()) // Total tokensfmt.Println(resp.Cost()) // Total cost in USDFor multimodal responses:
if resp.HasMedia() { for _, part := range resp.Parts() { if part.Media != nil { fmt.Printf("Media: %s\n", part.Media.URL) } }}type Response struct { // contains filtered or unexported fields}func NewResponseForTest
Section titled “func NewResponseForTest”func NewResponseForTest(text string, toolCalls []types.MessageToolCall, opts ...ResponseTestOption) *ResponseNewResponseForTest creates a Response for use in tests outside the sdk package. This is not intended for production use.
func (*Response) ClientTools
Section titled “func (*Response) ClientTools”func (r *Response) ClientTools() []PendingClientToolClientTools returns client tools awaiting fulfillment by the caller.
When no Conversation.OnClientTool handler is registered for a tool, the pipeline suspends and the pending client tools are returned here. The caller should fulfill them via Conversation.SendToolResult or Conversation.RejectClientTool, then call Conversation.Resume.
func (*Response) Cost
Section titled “func (*Response) Cost”func (r *Response) Cost() float64Cost returns the total cost in USD for this response.
func (*Response) Duration
Section titled “func (*Response) Duration”func (r *Response) Duration() time.DurationDuration returns how long the request took.
func (*Response) HasMedia
Section titled “func (*Response) HasMedia”func (r *Response) HasMedia() boolHasMedia returns true if the response contains any media content.
func (*Response) HasPendingClientTools
Section titled “func (*Response) HasPendingClientTools”func (r *Response) HasPendingClientTools() boolHasPendingClientTools returns true if the response contains client tools that the caller must fulfill before the conversation can continue.
func (*Response) HasToolCalls
Section titled “func (*Response) HasToolCalls”func (r *Response) HasToolCalls() boolHasToolCalls returns true if the response contains tool calls.
func (*Response) InputTokens
Section titled “func (*Response) InputTokens”func (r *Response) InputTokens() intInputTokens returns the number of input (prompt) tokens used.
func (*Response) Message
Section titled “func (*Response) Message”func (r *Response) Message() *types.MessageMessage returns the underlying runtime Message.
Use this when you need direct access to the message structure, such as for serialization or passing to other runtime components.
func (*Response) OutputTokens
Section titled “func (*Response) OutputTokens”func (r *Response) OutputTokens() intOutputTokens returns the number of output (completion) tokens used.
func (*Response) Parts
Section titled “func (*Response) Parts”func (r *Response) Parts() []types.ContentPartParts returns all content parts in the response.
Use this for multimodal responses that may contain text, images, audio, or other content types.
func (*Response) PendingTools
Section titled “func (*Response) PendingTools”func (r *Response) PendingTools() []PendingToolPendingTools returns tools that are awaiting external approval.
This is used for Human-in-the-Loop (HITL) workflows where certain tools require approval before execution.
func (*Response) Text
Section titled “func (*Response) Text”func (r *Response) Text() stringText returns the text content of the response.
This is a convenience method that extracts all text parts and joins them. For responses with only text content, this returns the full response. For multimodal responses, use Response.Parts to access all content.
func (*Response) TokensUsed
Section titled “func (*Response) TokensUsed”func (r *Response) TokensUsed() intTokensUsed returns the total number of tokens used (input + output).
func (*Response) ToolCalls
Section titled “func (*Response) ToolCalls”func (r *Response) ToolCalls() []types.MessageToolCallToolCalls returns the tool calls made during this turn.
Tool calls are requests from the LLM to execute functions. If you have registered handlers via Conversation.OnTool, they will be executed automatically and the results sent back to the LLM.
func (*Response) Validations
Section titled “func (*Response) Validations”func (r *Response) Validations() []types.ValidationResultValidations returns the results of all validators that ran.
Validators are defined in the pack and run automatically on responses. Check this to see which validators passed or failed.
type ResponseTestOption
Section titled “type ResponseTestOption”ResponseTestOption configures a Response created by NewResponseForTest.
type ResponseTestOption func(*Response)func WithClientToolsForTest
Section titled “func WithClientToolsForTest”func WithClientToolsForTest(tools []PendingClientTool) ResponseTestOptionWithClientToolsForTest attaches pending client tools to a test response.
type SendOption
Section titled “type SendOption”SendOption configures a single Send call.
type SendOption func(*sendConfig) errorfunc WithAudioData
Section titled “func WithAudioData”func WithAudioData(data []byte, mimeType string) SendOptionWithAudioData attaches audio from raw bytes.
resp, _ := conv.Send(ctx, "Transcribe this audio", sdk.WithAudioData(audioBytes, "audio/mp3"),)func WithAudioFile
Section titled “func WithAudioFile”func WithAudioFile(path string) SendOptionWithAudioFile attaches audio from a file path.
resp, _ := conv.Send(ctx, "Transcribe this audio", sdk.WithAudioFile("/path/to/audio.mp3"),)func WithDocumentData
Section titled “func WithDocumentData”func WithDocumentData(data []byte, mimeType string) SendOptionWithDocumentData attaches a document from raw data with the specified MIME type.
resp, _ := conv.Send(ctx, "Review this PDF", sdk.WithDocumentData(pdfBytes, types.MIMETypePDF),)func WithDocumentFile
Section titled “func WithDocumentFile”func WithDocumentFile(path string) SendOptionWithDocumentFile attaches a document from a file path (PDF, Word, markdown, etc.).
resp, _ := conv.Send(ctx, "Analyze this document", sdk.WithDocumentFile("contract.pdf"),)func WithFile
Section titled “func WithFile”func WithFile(name string, data []byte) SendOptionWithFile attaches a file with the given name and content.
Deprecated: Use WithDocumentFile or WithDocumentData instead for proper document handling. This function is kept for backward compatibility but should not be used for new code as it cannot properly handle binary files.
resp, _ := conv.Send(ctx, "Analyze this data", sdk.WithFile("data.csv", csvBytes),)func WithImageData
Section titled “func WithImageData”func WithImageData(data []byte, mimeType string, detail ...*string) SendOptionWithImageData attaches an image from raw bytes.
resp, _ := conv.Send(ctx, "What's in this image?", sdk.WithImageData(imageBytes, "image/png"),)func WithImageFile
Section titled “func WithImageFile”func WithImageFile(path string, detail ...*string) SendOptionWithImageFile attaches an image from a file path.
resp, _ := conv.Send(ctx, "What's in this image?", sdk.WithImageFile("/path/to/image.jpg"),)func WithImageURL
Section titled “func WithImageURL”func WithImageURL(url string, detail ...*string) SendOptionWithImageURL attaches an image from a URL.
resp, _ := conv.Send(ctx, "What's in this image?", sdk.WithImageURL("https://example.com/photo.jpg"),)func WithVideoData
Section titled “func WithVideoData”func WithVideoData(data []byte, mimeType string) SendOptionWithVideoData attaches a video from raw bytes.
resp, _ := conv.Send(ctx, "Describe this video", sdk.WithVideoData(videoBytes, "video/mp4"),)func WithVideoFile
Section titled “func WithVideoFile”func WithVideoFile(path string) SendOptionWithVideoFile attaches a video from a file path.
resp, _ := conv.Send(ctx, "Describe this video", sdk.WithVideoFile("/path/to/video.mp4"),)type SessionMode
Section titled “type SessionMode”SessionMode represents the conversation’s session mode.
type SessionMode intconst ( // UnaryMode for request/response conversations. UnaryMode SessionMode = iota // DuplexMode for bidirectional streaming conversations. DuplexMode)type ShutdownManager
Section titled “type ShutdownManager”ShutdownManager tracks active conversations and closes them all on shutdown. It is safe for concurrent use.
Use NewShutdownManager to create an instance and WithShutdownManager to wire it into Open / OpenDuplex so that conversations are automatically registered and deregistered.
Example:
mgr := sdk.NewShutdownManager()go sdk.GracefulShutdown(mgr, 30*time.Second)
conv, _ := sdk.Open("./chat.pack.json", "assistant", sdk.WithShutdownManager(mgr),)defer conv.Close() // automatically deregisterstype ShutdownManager struct { // contains filtered or unexported fields}func NewShutdownManager
Section titled “func NewShutdownManager”func NewShutdownManager() *ShutdownManagerNewShutdownManager creates a new ShutdownManager.
func (*ShutdownManager) Deregister
Section titled “func (*ShutdownManager) Deregister”func (m *ShutdownManager) Deregister(id string)Deregister removes a conversation from tracking. It is safe to call with an ID that was never registered or was already deregistered.
func (*ShutdownManager) Len
Section titled “func (*ShutdownManager) Len”func (m *ShutdownManager) Len() intLen returns the number of currently tracked conversations.
func (*ShutdownManager) Register
Section titled “func (*ShutdownManager) Register”func (m *ShutdownManager) Register(id string, conv io.Closer) errorRegister tracks a conversation for shutdown. If the manager has already been shut down, it returns ErrShutdownManagerClosed.
func (*ShutdownManager) Shutdown
Section titled “func (*ShutdownManager) Shutdown”func (m *ShutdownManager) Shutdown(ctx context.Context) errorShutdown closes all tracked conversations. It respects the context deadline, returning a context error if the deadline is exceeded before all conversations are closed. After Shutdown returns, new registrations are rejected.
Concurrency is bounded by [maxConcurrentClosures] to avoid spawning an unbounded number of goroutines.
Errors from individual Close calls are collected and returned as a combined error using errors.Join.
type SkillsCapability
Section titled “type SkillsCapability”SkillsCapability provides skill activation/deactivation tools to conversations. Skills are loaded from directories or inline definitions and can be dynamically activated by the LLM via the skill__activate tool.
type SkillsCapability struct { // contains filtered or unexported fields}func NewSkillsCapability
Section titled “func NewSkillsCapability”func NewSkillsCapability(sources []skills.SkillSource, opts ...SkillsOption) *SkillsCapabilityNewSkillsCapability creates a new SkillsCapability from the given sources.
func (*SkillsCapability) Close
Section titled “func (*SkillsCapability) Close”func (c *SkillsCapability) Close() errorClose is a no-op for SkillsCapability.
func (*SkillsCapability) Executor
Section titled “func (*SkillsCapability) Executor”func (c *SkillsCapability) Executor() *skills.ExecutorExecutor returns the underlying skills executor for testing.
func (*SkillsCapability) Init
Section titled “func (*SkillsCapability) Init”func (c *SkillsCapability) Init(ctx CapabilityContext) errorInit discovers skills from sources and creates an executor.
func (*SkillsCapability) Name
Section titled “func (*SkillsCapability) Name”func (c *SkillsCapability) Name() stringName returns the capability identifier.
func (*SkillsCapability) RegisterTools
Section titled “func (*SkillsCapability) RegisterTools”func (c *SkillsCapability) RegisterTools(registry *tools.Registry)RegisterTools registers the skill management tools into the registry.
type SkillsOption
Section titled “type SkillsOption”SkillsOption configures a SkillsCapability.
type SkillsOption func(*SkillsCapability)func WithMaxActiveSkills
Section titled “func WithMaxActiveSkills”func WithMaxActiveSkills(n int) SkillsOptionWithMaxActiveSkills sets the maximum number of concurrently active skills.
func WithSkillSelector
Section titled “func WithSkillSelector”func WithSkillSelector(s skills.SkillSelector) SkillsOptionWithSkillSelector sets a custom skill selector for filtering available skills.
type StatefulCapability
Section titled “type StatefulCapability”StatefulCapability can update tools dynamically (e.g., after workflow state changes).
type StatefulCapability interface { Capability RefreshTools(registry *tools.Registry)}type StaticEndpointResolver
Section titled “type StaticEndpointResolver”StaticEndpointResolver returns the same base URL for every agent. This is useful when all agents are behind a single gateway or when developing locally against a single A2A server.
type StaticEndpointResolver struct { BaseURL string}func (*StaticEndpointResolver) Resolve
Section titled “func (*StaticEndpointResolver) Resolve”func (r *StaticEndpointResolver) Resolve(_ string) stringResolve returns the static base URL for any agent name.
type StreamChunk
StreamChunk represents a single chunk in a streaming response.
type StreamChunk struct {
	// Type of this chunk
	Type ChunkType

	// Text content (for ChunkText type)
	Text string

	// Tool call (for ChunkToolCall type)
	ToolCall *types.MessageToolCall

	// Media content (for ChunkMedia type)
	Media *types.MediaContent

	// ClientTool contains a pending client tool request (for ChunkClientTool type).
	// The caller should fulfill it via SendToolResult/RejectClientTool, then call ResumeStream.
	ClientTool *PendingClientTool

	// Complete response (for ChunkDone type)
	Message *Response

	// Error (if any occurred)
	Error error
}

type StreamDoneEvent
StreamDoneEvent is emitted when the stream completes.
type StreamDoneEvent struct {
	// Response contains the complete response with metadata.
	Response *Response
}

type StreamEvent
StreamEvent is the interface for all stream event types. Use a type switch to handle specific events.
type StreamEvent interface {
	// contains filtered or unexported methods
}

type StreamEventHandler
StreamEventHandler is called for each event during streaming.
type StreamEventHandler func(event StreamEvent)

type TextDeltaEvent
TextDeltaEvent is emitted for each text delta during streaming.
type TextDeltaEvent struct {
	// Delta is the new text content in this chunk.
	Delta string
}

type ToolError
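The type switch recommended for StreamEvent can be sketched with local stand-ins for the documented event types; the unexported marker method below is an assumption standing in for the SDK's filtered interface methods:

```go
package main

import "fmt"

// Local sketches of the documented event types.
type StreamEvent interface{ isStreamEvent() }

type TextDeltaEvent struct{ Delta string }
type StreamDoneEvent struct{ Final string }

func (TextDeltaEvent) isStreamEvent()  {}
func (StreamDoneEvent) isStreamEvent() {}

// handle shows the type switch a StreamEventHandler would use.
func handle(ev StreamEvent) string {
	switch e := ev.(type) {
	case TextDeltaEvent:
		return e.Delta
	case StreamDoneEvent:
		return "[done: " + e.Final + "]"
	default:
		return "" // ignore event types you don't care about
	}
}

func main() {
	events := []StreamEvent{
		TextDeltaEvent{Delta: "Hi"},
		TextDeltaEvent{Delta: "!"},
		StreamDoneEvent{Final: "Hi!"},
	}
	for _, ev := range events {
		fmt.Print(handle(ev))
	}
	fmt.Println()
}
```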
ToolError represents an error executing a tool.
type ToolError struct {
	// ToolName is the name of the tool that failed.
	ToolName string

	// Cause is the underlying error from the tool handler.
	Cause error
}

func (*ToolError) Error
func (e *ToolError) Error() string
Error implements the error interface.
func (*ToolError) Unwrap
func (e *ToolError) Unwrap() error
Unwrap returns the underlying error.
type ToolHandler
ToolHandler is a function that executes a tool call. It receives the parsed arguments from the LLM and returns a result.
The args map contains the arguments as specified in the tool’s schema. The return value should be JSON-serializable.
conv.OnTool("get_weather", func(args map[string]any) (any, error) {
	city := args["city"].(string)
	return weatherAPI.GetCurrent(city)
})

type ToolHandler func(args map[string]any) (any, error)

type ToolHandlerCtx
ToolHandlerCtx is like ToolHandler but receives a context. Use this when your tool implementation needs context for cancellation or deadlines.
conv.OnToolCtx("search_db", func(ctx context.Context, args map[string]any) (any, error) {
	return db.SearchWithContext(ctx, args["query"].(string))
})

type ToolHandlerCtx func(ctx context.Context, args map[string]any) (any, error)

type VADModeConfig
VADModeConfig configures VAD (Voice Activity Detection) mode for voice conversations. In VAD mode, the pipeline processes audio through: AudioTurnStage → STTStage → ProviderStage → TTSStage
This enables voice conversations using standard text-based LLMs.
type VADModeConfig struct {
	// SilenceDuration is how long silence must persist to trigger turn completion.
	// Default: 800ms
	SilenceDuration time.Duration

	// MinSpeechDuration is the minimum speech duration before a turn can complete.
	// Default: 200ms
	MinSpeechDuration time.Duration

	// MaxTurnDuration is the maximum turn length before forcing completion.
	// Default: 30s
	MaxTurnDuration time.Duration

	// SampleRate is the audio sample rate.
	// Default: 16000
	SampleRate int

	// Language is the language hint for STT (e.g., "en", "es").
	// Default: "en"
	Language string

	// Voice is the TTS voice to use.
	// Default: "alloy"
	Voice string

	// Speed is the TTS speech rate (0.5-2.0).
	// Default: 1.0
	Speed float64
}

func DefaultVADModeConfig
func DefaultVADModeConfig() *VADModeConfig
DefaultVADModeConfig returns sensible defaults for VAD mode.
type ValidateEvalTypesOpts
ValidateEvalTypesOpts configures eval type validation.
type ValidateEvalTypesOpts struct {
	// PackPath loads a PromptPack from the filesystem.
	PackPath string

	// PackData parses a PromptPack from raw JSON bytes.
	PackData []byte

	// EvalDefs provides pre-resolved eval definitions directly.
	EvalDefs []evals.EvalDef

	// PromptName selects which prompt's evals to merge with pack-level evals.
	// Only used with PackPath or PackData. If empty, only pack-level evals are checked.
	PromptName string

	// RuntimeConfigPath registers exec eval handlers from a RuntimeConfig YAML file
	// before validation, so custom eval types are recognized.
	RuntimeConfigPath string

	// Registry overrides the default eval type registry.
	// If nil, a registry with all built-in handlers is created.
	Registry *evals.EvalTypeRegistry

	// SkipSchemaValidation disables JSON schema validation when loading from PackPath.
	SkipSchemaValidation bool
}

type ValidationError
ValidationError represents a validation failure.
type ValidationError struct {
	// ValidatorType is the type of validator that failed (e.g., "banned_words").
	ValidatorType string

	// Message describes what validation rule was violated.
	Message string

	// Details contains validator-specific information about the failure.
	Details map[string]any
}

func AsValidationError
func AsValidationError(err error) (*ValidationError, bool)
AsValidationError checks if an error is a ValidationError and returns it.
resp, err := conv.Send(ctx, message)
if err != nil {
	if vErr, ok := sdk.AsValidationError(err); ok {
		fmt.Printf("Validation failed: %s\n", vErr.ValidatorType)
	}
}

func (*ValidationError) Error
func (e *ValidationError) Error() string
Error implements the error interface.
type VideoStreamConfig
VideoStreamConfig configures realtime video/image streaming for duplex sessions. This enables webcam feeds, screen sharing, and continuous frame analysis.
type VideoStreamConfig struct {
	// TargetFPS is the target frame rate for streaming.
	// Frames exceeding this rate will be dropped.
	// Default: 1.0 (one frame per second, suitable for most LLM vision scenarios)
	TargetFPS float64

	// MaxWidth is the maximum frame width in pixels.
	// Frames larger than this are resized. 0 means no limit.
	// Default: 0 (no resizing)
	MaxWidth int

	// MaxHeight is the maximum frame height in pixels.
	// Frames larger than this are resized. 0 means no limit.
	// Default: 0 (no resizing)
	MaxHeight int

	// Quality is the JPEG compression quality (1-100) for frame encoding.
	// Higher values = better quality, larger size.
	// Default: 85
	Quality int

	// EnableResize enables automatic frame resizing when dimensions exceed limits.
	// Default: true (resizing enabled when MaxWidth/MaxHeight are set)
	EnableResize bool
}

func DefaultVideoStreamConfig
func DefaultVideoStreamConfig() *VideoStreamConfig
DefaultVideoStreamConfig returns sensible defaults for video streaming.
type WorkflowCapability
WorkflowCapability provides the workflow__transition tool for LLM-initiated state transitions.
type WorkflowCapability struct {
	// contains filtered or unexported fields
}

func NewWorkflowCapability
func NewWorkflowCapability() *WorkflowCapability
NewWorkflowCapability creates a new WorkflowCapability.
func (*WorkflowCapability) Close
func (w *WorkflowCapability) Close() error
Close is a no-op for WorkflowCapability.
func (*WorkflowCapability) Init
func (w *WorkflowCapability) Init(ctx CapabilityContext) error
Init stores the workflow spec from the pack for later tool registration.
func (*WorkflowCapability) Name
func (w *WorkflowCapability) Name() string
Name returns the capability identifier.
func (*WorkflowCapability) RegisterTools
func (w *WorkflowCapability) RegisterTools(_ *tools.Registry)
RegisterTools is a no-op at the conversation level; WorkflowConversation calls RegisterToolsForState per state.
func (*WorkflowCapability) RegisterToolsForState
func (w *WorkflowCapability) RegisterToolsForState(registry *tools.Registry, state *workflow.State)
RegisterToolsForState registers workflow__transition for a specific state. Called by WorkflowConversation when opening a conversation for a state.
type WorkflowConversation
WorkflowConversation manages a stateful workflow that transitions between different prompts in a pack based on events.
Each state in the workflow maps to a prompt_task in the pack. When a transition occurs, the current conversation is closed and a new one is opened for the target state’s prompt.
Basic usage:
wc, err := sdk.OpenWorkflow("./support.pack.json")
if err != nil {
	log.Fatal(err)
}
defer wc.Close()

resp, _ := wc.Send(ctx, "I need help with billing")
fmt.Println(resp.Text())

newState, _ := wc.Transition("Escalate")
fmt.Println("Moved to:", newState)

type WorkflowConversation struct {
	// contains filtered or unexported fields
}

func OpenWorkflow
func OpenWorkflow(packPath string, opts ...Option) (*WorkflowConversation, error)
OpenWorkflow loads a pack file and creates a WorkflowConversation.
The pack must contain a workflow section. The initial conversation is opened for the workflow’s entry state prompt_task.
wc, err := sdk.OpenWorkflow("./support.pack.json",
	sdk.WithModel("gpt-4o"),
)

func ResumeWorkflow
func ResumeWorkflow(workflowID, packPath string, opts ...Option) (*WorkflowConversation, error)
ResumeWorkflow restores a WorkflowConversation from a previously persisted state.
The workflow context is loaded from the state store's metadata["workflow"] key. A state store must be configured via WithStateStore.
wc, err := sdk.ResumeWorkflow("workflow-123", "./support.pack.json",
	sdk.WithStateStore(store),
)

func (*WorkflowConversation) ActiveConversation
func (wc *WorkflowConversation) ActiveConversation() *Conversation
ActiveConversation returns the current state's Conversation. Use this to access conversation-specific methods like SetVar, OnTool, etc.
func (*WorkflowConversation) AvailableEvents
func (wc *WorkflowConversation) AvailableEvents() []string
AvailableEvents returns the events available in the current state, sorted alphabetically.
func (*WorkflowConversation) Close
func (wc *WorkflowConversation) Close() error
Close closes the active conversation and marks the workflow as closed.
func (*WorkflowConversation) Context
func (wc *WorkflowConversation) Context() *workflow.Context
Context returns a snapshot of the workflow execution context, including transition history and metadata.
func (*WorkflowConversation) CurrentPromptTask
func (wc *WorkflowConversation) CurrentPromptTask() string
CurrentPromptTask returns the prompt_task for the current state.
func (*WorkflowConversation) CurrentState
func (wc *WorkflowConversation) CurrentState() string
CurrentState returns the current workflow state name.
func (*WorkflowConversation) IsComplete
func (wc *WorkflowConversation) IsComplete() bool
IsComplete returns true if the workflow is in a terminal state (no outgoing transitions).
func (*WorkflowConversation) OrchestrationMode
func (wc *WorkflowConversation) OrchestrationMode() workflow.Orchestration
OrchestrationMode returns the orchestration mode of the current state. External orchestration means transitions are driven by outside callers (e.g., HTTP handlers, message queues) rather than from within the conversation loop.
func (*WorkflowConversation) Send
func (wc *WorkflowConversation) Send(ctx context.Context, message any, opts ...SendOption) (*Response, error)
Send sends a message to the active state's conversation and returns the response. If the LLM calls the workflow__transition tool, the transition is processed after Send completes.
resp, err := wc.Send(ctx, "Hello!")
fmt.Println(resp.Text())

func (*WorkflowConversation) Transition
func (wc *WorkflowConversation) Transition(event string) (string, error)
Transition processes an event and moves to the next state.
The current conversation is closed and a new one is opened for the target state’s prompt_task. Returns the new state name.
newState, err := wc.Transition("Escalate")
if errors.Is(err, workflow.ErrInvalidEvent) {
	fmt.Println("Available events:", wc.AvailableEvents())
}

Generated by gomarkdoc