Tutorial 4: Validation & Guardrails
Add content safety and validation to your LLM application.
Time: 20 minutes | Level: Intermediate
What You’ll Build
A chatbot with content filtering and validation guardrails using the hooks system.
What You’ll Learn
- Register guardrail hooks via the SDK
- Filter banned words and enforce length limits
- Understand enforcement behavior (content replacement/truncation)
- Use monitor-only mode for observability without enforcement
- Create custom ProviderHook implementations
- Use streaming guardrails via ChunkInterceptor
- Configure guardrails in pack YAML
Prerequisites
- Completed Tutorial 1
Step 1: Basic Guardrails
Add banned-word filtering and length limits using the SDK:
```go
package main

import (
	"bufio"
	"context"
	"fmt"
	"log"
	"os"
	"strings"

	"github.com/AltairaLabs/PromptKit/runtime/hooks/guardrails"
	"github.com/AltairaLabs/PromptKit/sdk"
)

func main() {
	// Open a conversation with guardrail hooks
	conv, err := sdk.Open("./app.pack.json", "chat",
		sdk.WithProviderHook(guardrails.NewBannedWordsHook([]string{
			"spam", "hack", "exploit",
		})),
		sdk.WithProviderHook(guardrails.NewLengthHook(2000, 500)),
		sdk.WithProviderHook(guardrails.NewMaxSentencesHook(10)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conv.Close()

	ctx := context.Background()
	scanner := bufio.NewScanner(os.Stdin)

	fmt.Println("Safe Chatbot (with content filtering)")
	fmt.Print("\nYou: ")

	for scanner.Scan() {
		input := strings.TrimSpace(scanner.Text())

		if input == "exit" {
			break
		}
		if input == "" {
			fmt.Print("You: ")
			continue
		}

		resp, err := conv.Send(ctx, input)
		if err != nil {
			log.Printf("\nError: %v\n\n", err)
			fmt.Print("You: ")
			continue
		}

		// Guardrails enforce in-place — check for violations
		if len(resp.Validations) > 0 {
			fmt.Printf("\nBot: %s (guardrail enforced)\n\n", resp.Text())
		} else {
			fmt.Printf("\nBot: %s\n\n", resp.Text())
		}
		fmt.Print("You: ")
	}

	fmt.Println("Goodbye!")
}
```

Step 2: Test Guardrails
Try these inputs:

```
You: Hello!
Bot: Hi! How can I help you?

You: How do I hack a system?
Bot: Sorry, we can't provide this response as it would violate our content policy. (guardrail enforced)

You: Tell me about artificial intelligence
Bot: Artificial intelligence is...
```

When a banned word is detected, the guardrail replaces the content with a safe message and the pipeline continues. The original response is never returned to the user.
Built-in Guardrail Hooks
BannedWordsHook
Blocks messages containing banned words (case-insensitive, word-boundary matching):

```go
hook := guardrails.NewBannedWordsHook([]string{
	"spam", "hack", "exploit", "inappropriate",
})
```

Streaming: Yes — aborts the stream immediately on detection.
LengthHook
Enforces character and/or token limits (pass 0 to disable a limit):

```go
hook := guardrails.NewLengthHook(2000, 500) // maxCharacters, maxTokens
```

Streaming: Yes — aborts when the limit is exceeded.
MaxSentencesHook
Limits the number of sentences in a response:

```go
hook := guardrails.NewMaxSentencesHook(5)
```

Streaming: No — requires the complete response.
RequiredFieldsHook
Ensures the response contains required strings:

```go
hook := guardrails.NewRequiredFieldsHook([]string{"order number", "tracking number"})
```

Streaming: No — requires the complete response.
Custom Hooks
Create domain-specific hooks by implementing ProviderHook:

```go
package main

import (
	"context"
	"regexp"

	"github.com/AltairaLabs/PromptKit/runtime/hooks"
)

// PIIHook blocks responses containing personally identifiable information.
type PIIHook struct {
	emailRegex *regexp.Regexp
	phoneRegex *regexp.Regexp
}

func NewPIIHook() *PIIHook {
	return &PIIHook{
		emailRegex: regexp.MustCompile(`\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b`),
		phoneRegex: regexp.MustCompile(`\b\d{3}[-.]?\d{3}[-.]?\d{4}\b`),
	}
}

func (h *PIIHook) Name() string { return "pii_filter" }

func (h *PIIHook) BeforeCall(ctx context.Context, req *hooks.ProviderRequest) hooks.Decision {
	return hooks.Allow
}

func (h *PIIHook) AfterCall(ctx context.Context, req *hooks.ProviderRequest, resp *hooks.ProviderResponse) hooks.Decision {
	content := resp.Message.Content()
	if h.emailRegex.MatchString(content) {
		return hooks.Deny("response contains email address")
	}
	if h.phoneRegex.MatchString(content) {
		return hooks.Deny("response contains phone number")
	}
	return hooks.Allow
}
```

Use it alongside built-in hooks:

```go
conv, _ := sdk.Open("./app.pack.json", "chat",
	sdk.WithProviderHook(guardrails.NewBannedWordsHook([]string{"spam", "hack"})),
	sdk.WithProviderHook(guardrails.NewLengthHook(2000, 500)),
	sdk.WithProviderHook(NewPIIHook()),
)
```

Production Example
Combine multiple hooks with proper error handling:

```go
package main

import (
	"bufio"
	"context"
	"fmt"
	"log"
	"os"
	"strings"

	"github.com/AltairaLabs/PromptKit/runtime/hooks/guardrails"
	"github.com/AltairaLabs/PromptKit/sdk"
)

func main() {
	conv, err := sdk.Open("./app.pack.json", "chat",
		// Streaming guardrails (can abort mid-stream)
		sdk.WithProviderHook(guardrails.NewBannedWordsHook([]string{
			"spam", "scam", "hack", "exploit",
		})),
		sdk.WithProviderHook(guardrails.NewLengthHook(2000, 500)),
		// Post-completion guardrails
		sdk.WithProviderHook(guardrails.NewMaxSentencesHook(10)),
		sdk.WithProviderHook(guardrails.NewRequiredFieldsHook([]string{})),
		// Custom hook
		sdk.WithProviderHook(NewPIIHook()),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conv.Close()

	ctx := context.Background()
	scanner := bufio.NewScanner(os.Stdin)

	fmt.Println("=== Secure Chatbot ===")
	fmt.Println("Content filtering enabled")
	fmt.Print("\nYou: ")

	for scanner.Scan() {
		input := strings.TrimSpace(scanner.Text())
		if input == "exit" {
			break
		}
		if input == "" {
			fmt.Print("You: ")
			continue
		}

		resp, err := conv.Send(ctx, input)
		if err != nil {
			fmt.Printf("\n❌ Error: %v\n\n", err)
			fmt.Print("You: ")
			continue
		}

		// Guardrails enforce in-place (truncate or replace content).
		// Check validations for details.
		for _, v := range resp.Validations {
			if !v.Passed {
				fmt.Printf("⚠️ Guardrail %s enforced\n", v.ValidatorType)
			}
		}

		fmt.Printf("\nBot: %s\n\n", resp.Text())
		fmt.Print("You: ")
	}

	fmt.Println("Goodbye!")
}
```

Streaming Guardrails
Hooks that implement ChunkInterceptor can inspect each streaming chunk. When a guardrail triggers during streaming, the stream is stopped and the enforced content is returned:

```go
// Stream with guardrails
ch := conv.Stream(ctx, "Tell me about security")
for chunk := range ch {
	if chunk.Error != nil {
		log.Printf("Stream error: %v\n", chunk.Error)
		break
	}
	if chunk.Type == sdk.ChunkText {
		fmt.Print(chunk.Text)
	}
}
```

BannedWordsHook and LengthHook both support streaming — they enforce immediately when a violation is detected (truncating content for length, stopping the stream for banned words), saving API costs on wasted tokens.
Pack YAML Approach
You can also define validators in your pack’s prompt config. They are automatically converted to guardrail hooks at runtime:

```yaml
# prompts/chat.yaml
spec:
  system_template: |
    You are a helpful assistant.

  validators:
    - type: banned_words
      params:
        words:
          - hack
          - exploit
        message: "This response has been blocked by our content policy."

    - type: max_length
      params:
        max_characters: 2000
        max_tokens: 500

    - type: max_sentences
      params:
        max_sentences: 10

    # Monitor-only: evaluate but don't modify content
    - type: banned_words
      params:
        words:
          - competitor_name
      monitor: true
```

The message field sets a custom user-facing message when content is blocked. The monitor field enables monitor-only mode — the guardrail evaluates and records results but doesn’t modify content.
Common Issues
Guardrail too strict
Problem: Legitimate messages blocked.
Solution: Refine the banned-words list, adjust length limits, and review hook denial reasons via HookDeniedError.Reason.
Guardrail too permissive
Problem: Inappropriate content getting through.
Solution: Add more hooks, use stricter patterns, consider a custom ProviderHook for domain-specific checks.
What You’ve Learned
- Register guardrail hooks via sdk.WithProviderHook
- Use built-in guardrails: banned words, length, sentences, required fields
- Guardrails enforce in-place (truncation/replacement) and the pipeline continues
- Use monitor: true for evaluation without enforcement
- Use message for custom blocked-content messages
- Create custom ProviderHook implementations
- Use streaming guardrails for early abort
- Configure guardrails in pack YAML
Next Steps
Continue to Tutorial 5: Production Deployment for production-ready patterns.
See Also
- Checks Reference — All check types and parameters
- Unified Check Model — How guardrails, assertions, and evals relate
- Guardrails Reference — Guardrail configuration and behavior
- Hooks & Guardrails Reference — Runtime hook system API
- Handle Errors — Error strategies