
Tutorial 1: First Pipeline

Learn the basics by building a simple LLM application.

Time: 15 minutes
Level: Beginner

You'll build a command-line application that sends prompts to an LLM and displays the responses. Along the way you will:

  • Create a pipeline
  • Configure an LLM provider
  • Execute requests
  • Handle responses
  • Track costs

Create a new Go module:

mkdir my-llm-app
cd my-llm-app
go mod init my-llm-app

Install PromptKit:

go get github.com/AltairaLabs/PromptKit/runtime@latest

Export your OpenAI API key:

export OPENAI_API_KEY="sk-..."

Create main.go:

package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/AltairaLabs/PromptKit/runtime/pipeline"
	"github.com/AltairaLabs/PromptKit/runtime/pipeline/middleware"
	"github.com/AltairaLabs/PromptKit/runtime/providers/openai"
)

func main() {
	// Step 1: Create provider
	provider := openai.NewOpenAIProvider(
		"openai",
		"gpt-4o-mini",
		os.Getenv("OPENAI_API_KEY"),
		openai.DefaultProviderDefaults(),
		false,
	)
	defer provider.Close()

	// Step 2: Build pipeline
	pipe := pipeline.NewPipeline(
		middleware.ProviderMiddleware(provider, nil, nil, &middleware.ProviderMiddlewareConfig{
			MaxTokens:   500,
			Temperature: 0.7,
		}),
	)
	defer pipe.Shutdown(context.Background())

	// Step 3: Execute request
	ctx := context.Background()
	result, err := pipe.Execute(ctx, "user", "What is artificial intelligence?")
	if err != nil {
		log.Fatal(err)
	}

	// Step 4: Display response
	fmt.Printf("Response: %s\n", result.Response.Content)
	fmt.Printf("Tokens: %d\n", result.Response.Usage.TotalTokens)
	fmt.Printf("Cost: $%.6f\n", result.Cost.TotalCost)
}
Run the application:

go run main.go

You should see output like:

Response: Artificial intelligence (AI) refers to the simulation of human intelligence...
Tokens: 152
Cost: $0.000023
Let's break down what each step does.

provider := openai.NewOpenAIProvider(
	"openai",                         // Provider name
	"gpt-4o-mini",                    // Model (cost-effective)
	os.Getenv("OPENAI_API_KEY"),      // API key
	openai.DefaultProviderDefaults(), // Default settings
	false,                            // Debug mode off
)

The provider connects to OpenAI’s API. We use gpt-4o-mini for cost-effectiveness.

pipe := pipeline.NewPipeline(
	middleware.ProviderMiddleware(provider, nil, nil, config),
)

The pipeline processes requests through middleware. ProviderMiddleware sends requests to the LLM.
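The middleware pattern itself is plain Go: each middleware wraps a handler and returns a new handler, and the pipeline composes them into a chain. Here is a minimal, self-contained sketch of that idea (illustrative only, not PromptKit's actual types):

```go
package main

import "fmt"

// Handler processes a prompt and returns a response.
type Handler func(prompt string) string

// Middleware wraps a Handler with extra behavior.
type Middleware func(Handler) Handler

// chain composes middleware so the first one listed runs outermost.
func chain(h Handler, mws ...Middleware) Handler {
	for i := len(mws) - 1; i >= 0; i-- {
		h = mws[i](h)
	}
	return h
}

func main() {
	// A logging middleware: runs before and after the inner handler.
	logging := func(next Handler) Handler {
		return func(prompt string) string {
			fmt.Println("request:", prompt)
			return next(prompt)
		}
	}
	// The innermost handler stands in for the provider call.
	provider := func(prompt string) string { return "echo: " + prompt }

	pipe := chain(provider, logging)
	fmt.Println(pipe("hi"))
}
```

In PromptKit, ProviderMiddleware plays the role of the innermost handler: it is the stage that actually calls the LLM.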

result, err := pipe.Execute(ctx, "user", "What is artificial intelligence?")

Execute() takes:

  • Context for cancellation
  • Role ("user" for user messages)
  • Content (your prompt)

fmt.Printf("Response: %s\n", result.Response.Content)
fmt.Printf("Tokens: %d\n", result.Response.Usage.TotalTokens)
fmt.Printf("Cost: $%.6f\n", result.Cost.TotalCost)

The result contains:

  • Response.Content: LLM’s response text
  • Response.Usage: Token counts
  • Cost.TotalCost: Cost in dollars
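Cost tracking is simple arithmetic over token counts and per-token prices. The sketch below uses illustrative prices, not real ones; actual prices vary by model and change over time, so check your provider's pricing page:

```go
package main

import "fmt"

// Illustrative per-million-token prices (assumed for this example,
// not real OpenAI pricing).
const (
	inputPricePerM  = 0.15 // USD per 1M input tokens
	outputPricePerM = 0.60 // USD per 1M output tokens
)

// cost computes the dollar cost of one request from its token counts.
func cost(promptTokens, completionTokens int) float64 {
	return float64(promptTokens)*inputPricePerM/1e6 +
		float64(completionTokens)*outputPricePerM/1e6
}

func main() {
	// e.g. 10 input tokens and 142 output tokens
	fmt.Printf("$%.6f\n", cost(10, 142)) // prints $0.000087
}
```

This is why tiny per-request costs still matter at scale: multiply by thousands of requests and the totals become real money, which is what result.Cost.TotalCost lets you monitor.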

Try modifying your application:

Use a more powerful model:

provider := openai.NewOpenAIProvider(
	"openai",
	"gpt-4o", // More capable, higher cost
	os.Getenv("OPENAI_API_KEY"),
	openai.DefaultProviderDefaults(),
	false,
)

Make responses more creative:

config := &middleware.ProviderMiddlewareConfig{
	MaxTokens:   500,
	Temperature: 1.0, // More creative (0.0 = deterministic, 2.0 = very creative)
}
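To build intuition for what temperature does, here is a self-contained sketch of temperature-scaled softmax, the standard mechanism by which sampling temperature flattens or sharpens a model's next-token distribution (conceptual only, not PromptKit or OpenAI internals):

```go
package main

import (
	"fmt"
	"math"
)

// softmax converts scores (logits) into probabilities, dividing by
// temperature first: lower temperatures sharpen the distribution
// toward the top-scoring token, higher temperatures flatten it.
func softmax(logits []float64, temp float64) []float64 {
	out := make([]float64, len(logits))
	var sum float64
	for i, l := range logits {
		out[i] = math.Exp(l / temp)
		sum += out[i]
	}
	for i := range out {
		out[i] /= sum
	}
	return out
}

func main() {
	logits := []float64{2.0, 1.0, 0.5}
	fmt.Printf("T=0.5: %.2f\n", softmax(logits, 0.5))
	fmt.Printf("T=2.0: %.2f\n", softmax(logits, 2.0))
}
```

With these logits the top token's probability falls from roughly 0.84 at T=0.5 to roughly 0.48 at T=2.0, which is why higher temperatures read as more varied and "creative".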

Reduce costs by limiting tokens:

config := &middleware.ProviderMiddlewareConfig{
	MaxTokens: 100, // Shorter responses
	Temperature: 0.7,
}
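To gauge whether a prompt fits your token budget before sending it, a common rough heuristic is about four characters per token for English text. This is an approximation only, and tokenizer-dependent; use a real tokenizer (e.g. tiktoken) when you need exact counts:

```go
package main

import "fmt"

// estimateTokens applies the rough "4 characters per token" rule of
// thumb for English text. Approximate only; real token counts depend
// on the model's tokenizer.
func estimateTokens(s string) int {
	return (len(s) + 3) / 4 // round up
}

func main() {
	prompt := "What is artificial intelligence?"
	fmt.Println(estimateTokens(prompt)) // prints 8
}
```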

Ask several questions:

questions := []string{
	"What is AI?",
	"What is machine learning?",
	"What is deep learning?",
}

for _, question := range questions {
	result, err := pipe.Execute(ctx, "user", question)
	if err != nil {
		log.Printf("Error: %v\n", err)
		continue
	}
	fmt.Printf("\nQ: %s\n", question)
	fmt.Printf("A: %s\n", result.Response.Content)
	fmt.Printf("Cost: $%.6f\n\n", result.Cost.TotalCost)
}

Troubleshooting

Problem: Invalid API key.

Solution: Check your API key is set:

echo $OPENAI_API_KEY

Problem: Request took too long.

Solution: Increase the timeout (this requires importing "time"):

ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

Problem: Costs are higher than expected (an expensive model, or long responses).

Solution:

  • Use gpt-4o-mini instead of gpt-4o
  • Reduce MaxTokens to 100-300
  • Monitor costs with result.Cost.TotalCost

Here’s the full application with better structure:

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	"github.com/AltairaLabs/PromptKit/runtime/pipeline"
	"github.com/AltairaLabs/PromptKit/runtime/pipeline/middleware"
	"github.com/AltairaLabs/PromptKit/runtime/providers/openai"
)

func main() {
	// Validate API key
	apiKey := os.Getenv("OPENAI_API_KEY")
	if apiKey == "" {
		log.Fatal("OPENAI_API_KEY environment variable not set")
	}

	// Create provider
	provider := openai.NewOpenAIProvider(
		"openai",
		"gpt-4o-mini",
		apiKey,
		openai.DefaultProviderDefaults(),
		false,
	)
	defer provider.Close()

	// Build pipeline
	config := &middleware.ProviderMiddlewareConfig{
		MaxTokens:   500,
		Temperature: 0.7,
	}
	pipe := pipeline.NewPipeline(
		middleware.ProviderMiddleware(provider, nil, nil, config),
	)
	defer pipe.Shutdown(context.Background())

	// Create context with timeout
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Get prompt from command line or use default
	prompt := "What is artificial intelligence?"
	if len(os.Args) > 1 {
		prompt = os.Args[1]
	}

	// Execute request
	fmt.Printf("Prompt: %s\n\n", prompt)
	result, err := pipe.Execute(ctx, "user", prompt)
	if err != nil {
		log.Fatalf("Request failed: %v", err)
	}

	// Display results
	fmt.Printf("Response:\n%s\n\n", result.Response.Content)
	fmt.Printf("--- Metrics ---\n")
	fmt.Printf("Input tokens: %d\n", result.Response.Usage.PromptTokens)
	fmt.Printf("Output tokens: %d\n", result.Response.Usage.CompletionTokens)
	fmt.Printf("Total tokens: %d\n", result.Response.Usage.TotalTokens)
	fmt.Printf("Cost: $%.6f\n", result.Cost.TotalCost)
}

Run with custom prompt:

go run main.go "Explain quantum computing in simple terms"

You now know how to:

✅ Create and configure a pipeline
✅ Connect to an LLM provider
✅ Execute basic requests
✅ Handle responses
✅ Track token usage and costs
✅ Adjust model parameters

Continue to Tutorial 2: Multi-Turn Conversations to add conversation state and build a chatbot.