Configuration Guide

Sifaka offers flexible configuration options to customize text improvement behavior.

Configuration Overview

Configuration can be set at multiple levels:

  1. Function parameters (highest priority)
  2. Config object
  3. Environment variables
  4. Defaults
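As an illustration of this precedence order, here is a minimal sketch of how such a resolver could work. The function and the default value shown are illustrative assumptions, not part of the sifaka API:

```python
import os

DEFAULT_MODEL = "gpt-4o-mini"  # assumed library default, for illustration only

def resolve_model(param=None, config_model=None):
    """Resolve the model using the priority order above:
    function parameter > Config object > environment variable > default."""
    if param is not None:
        return param
    if config_model is not None:
        return config_model
    env_model = os.environ.get("SIFAKA_DEFAULT_MODEL")
    if env_model is not None:
        return env_model
    return DEFAULT_MODEL

# A function parameter wins even when a Config value is present:
resolve_model(param="gpt-4", config_model="gpt-3.5-turbo")  # -> "gpt-4"
```

The same ordering applies to the other settings: anything passed directly to improve() overrides the Config object, which overrides environment variables.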

Basic Configuration

Using Function Parameters

from sifaka import improve

result = await improve(
    "Your text",
    model="gpt-4",
    temperature=0.8,
    max_iterations=5
)

Using Config Object

from sifaka import improve
from sifaka.core.config import Config, LLMConfig

config = Config(
    llm=LLMConfig(
        model="gpt-4",
        temperature=0.8
    )
)

result = await improve("Your text", config=config)

Configuration Options

LLM Configuration

Controls language model behavior:

from sifaka.core.config import LLMConfig

llm_config = LLMConfig(
    model="gpt-4o-mini",           # Model to use
    critic_model="gpt-3.5-turbo",  # Different model for critics
    temperature=0.7,               # Creativity (0.0-2.0)
    max_tokens=2000,               # Max response length
    timeout_seconds=60.0           # Request timeout
)

Available models include those used throughout this guide:

  - OpenAI: gpt-4, gpt-4o-mini, gpt-3.5-turbo
  - Anthropic: claude-3-haiku-20240307
  - Google: gemini-1.5-flash

Critic Configuration

Controls critic behavior:

from sifaka.core.config import CriticConfig
from sifaka.core.types import CriticType

critic_config = CriticConfig(
    critics=[CriticType.SELF_REFINE, CriticType.REFLEXION],
    critic_model="gpt-3.5-turbo",  # Optional: different model for critics
    confidence_threshold=0.6       # Minimum confidence to continue
)

Engine Configuration

Controls the improvement engine:

from sifaka.core.config import EngineConfig

engine_config = EngineConfig(
    max_iterations=3,        # Maximum improvement rounds
    parallel_critics=True,   # Run critics in parallel
    timeout_seconds=120.0    # Overall timeout
)

Complete Configuration Example

from sifaka import improve
from sifaka.core.config import Config, LLMConfig, CriticConfig, EngineConfig
from sifaka.core.types import CriticType

config = Config(
    llm=LLMConfig(
        model="gpt-4",
        temperature=0.8,
        max_tokens=2000,
        timeout_seconds=60.0
    ),
    critic=CriticConfig(
        critics=[CriticType.SELF_REFINE, CriticType.STYLE],
        critic_model="gpt-3.5-turbo",
        confidence_threshold=0.7
    ),
    engine=EngineConfig(
        max_iterations=4,
        parallel_critics=True,
        timeout_seconds=180.0
    )
)

result = await improve("Your text", config=config)

Environment Variables

Set default API keys and configuration:

# API Keys
export OPENAI_API_KEY="your-key"
export ANTHROPIC_API_KEY="your-key"
export GEMINI_API_KEY="your-key"

# Optional: Default model
export SIFAKA_DEFAULT_MODEL="gpt-4o-mini"
export SIFAKA_DEFAULT_TEMPERATURE="0.7"
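A quick way to check which provider keys are actually set before running sifaka is a plain-Python sketch like the following; the helper name is illustrative, not part of the library:

```python
import os

def available_providers(env_vars=("OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GEMINI_API_KEY")):
    """Return the names of provider key variables that are set and non-empty."""
    return [name for name in env_vars if os.environ.get(name)]

# e.g. if only OPENAI_API_KEY is exported, this returns ["OPENAI_API_KEY"]
```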

Model Selection

Choosing the Right Model

For quality: larger models such as gpt-4 (see the Higher Quality example below)

For speed: fast, inexpensive models such as gpt-3.5-turbo, claude-3-haiku-20240307, or gemini-1.5-flash

For balance: gpt-4o-mini offers a good quality/cost trade-off and is used in the Production Configuration below

Model-Specific Tips

OpenAI:

config = Config(
    llm=LLMConfig(
        model="gpt-4o-mini",
        temperature=0.7  # Good default
    )
)

Anthropic:

config = Config(
    llm=LLMConfig(
        model="claude-3-haiku-20240307",
        temperature=0.6  # Claude prefers lower temps
    )
)

Google:

config = Config(
    llm=LLMConfig(
        model="gemini-1.5-flash",
        temperature=0.8  # Gemini handles higher temps well
    )
)

Temperature Settings

Temperature controls the trade-off between creativity and consistency. As a rough guide (typical LLM behavior, not specific to sifaka):

  - 0.0-0.3: focused and repeatable; good for technical or factual text
  - 0.4-0.7: balanced; a sensible default for most tasks
  - 0.8-2.0: more varied and creative; useful for brainstorming and creative writing

Temperature by Use Case

# Technical documentation
config = Config(llm=LLMConfig(temperature=0.3))

# Marketing copy
config = Config(llm=LLMConfig(temperature=0.8))

# Creative writing
config = Config(llm=LLMConfig(temperature=1.0))

Performance Optimization

Faster Processing

# Use faster models and fewer iterations
fast_config = Config(
    llm=LLMConfig(
        model="gpt-3.5-turbo",
        timeout_seconds=30.0
    ),
    engine=EngineConfig(
        max_iterations=2,
        parallel_critics=True
    )
)

Higher Quality

# Use better models and more iterations
quality_config = Config(
    llm=LLMConfig(
        model="gpt-4",
        temperature=0.7
    ),
    critic=CriticConfig(
        critics=[
            CriticType.SELF_REFINE,
            CriticType.REFLEXION,
            CriticType.META_REWARDING
        ]
    ),
    engine=EngineConfig(
        max_iterations=5,
        parallel_critics=False  # Sequential for quality
    )
)

Cost Optimization

# Use different models for generation vs critique
cost_config = Config(
    llm=LLMConfig(
        model="gpt-4o-mini",        # Good generation model
        critic_model="gpt-3.5-turbo" # Cheaper critic model
    )
)

Advanced Configuration

Custom Timeouts

config = Config(
    llm=LLMConfig(
        timeout_seconds=30.0  # Per LLM call timeout
    ),
    engine=EngineConfig(
        timeout_seconds=120.0  # Overall operation timeout
    )
)

Parallel Processing

# Enable parallel critic evaluation
config = Config(
    engine=EngineConfig(
        parallel_critics=True  # Run multiple critics simultaneously
    )
)

Confidence Thresholds

# Stop early if critics are confident
config = Config(
    critic=CriticConfig(
        confidence_threshold=0.8  # Stop if 80% confident
    )
)

Configuration Patterns

Development Configuration

dev_config = Config(
    llm=LLMConfig(
        model="gpt-3.5-turbo",
        temperature=0.5  # Consistent for testing
    ),
    engine=EngineConfig(
        max_iterations=1,  # Fast feedback
        timeout_seconds=30.0
    )
)

Production Configuration

prod_config = Config(
    llm=LLMConfig(
        model="gpt-4o-mini",
        temperature=0.7,
        timeout_seconds=60.0
    ),
    critic=CriticConfig(
        critics=[CriticType.SELF_REFINE, CriticType.CONSTITUTIONAL],
        confidence_threshold=0.7
    ),
    engine=EngineConfig(
        max_iterations=3,
        parallel_critics=True,
        timeout_seconds=180.0
    )
)

High-Stakes Configuration

# For critical content (medical, legal, etc.)
critical_config = Config(
    llm=LLMConfig(
        model="gpt-4",
        temperature=0.3  # Low for consistency
    ),
    critic=CriticConfig(
        critics=[
            CriticType.CONSTITUTIONAL,
            CriticType.SELF_RAG,
            CriticType.META_REWARDING
        ],
        confidence_threshold=0.9  # High confidence required
    ),
    engine=EngineConfig(
        max_iterations=5,
        parallel_critics=False  # Sequential for thoroughness
    )
)

Troubleshooting Configuration

Common Issues

Timeouts:

# Increase timeouts for long texts
config = Config(
    llm=LLMConfig(timeout_seconds=120.0),
    engine=EngineConfig(timeout_seconds=300.0)
)

Inconsistent results:

# Lower temperature for consistency
config = Config(
    llm=LLMConfig(temperature=0.3)
)

High costs:

# Use cheaper models and fewer iterations
config = Config(
    llm=LLMConfig(model="gpt-3.5-turbo"),
    engine=EngineConfig(max_iterations=2)
)

Best Practices

  1. Start with defaults: Only configure what you need
  2. Test configurations: Find what works for your use case
  3. Monitor costs: Use appropriate models for your budget
  4. Set timeouts: Prevent runaway operations
  5. Use environment variables: For API keys and defaults
  6. Document your config: Explain why specific settings were chosen