Simple AI text improvement through research-backed critique with complete observability
Sifaka improves AI-generated text through iterative critique using research-backed techniques. Instead of hoping your AI output is good enough, Sifaka provides a transparent feedback loop where AI systems validate and improve their own outputs.
Core Value: See exactly how AI improves your text through research-backed techniques with complete audit trails.
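Conceptually, the feedback loop is generate → critique → revise, repeated until the critic is satisfied, with every step recorded. The sketch below illustrates that shape only; it is not Sifaka's implementation, and `critique_fn`/`revise_fn` are hypothetical stand-ins for LLM calls:

```python
def improve(text, critique_fn, revise_fn, max_iterations=3):
    """Generic critique-and-revise loop that keeps an audit trail."""
    history = []
    for _ in range(max_iterations):
        feedback = critique_fn(text)  # e.g. a research-backed critic
        history.append({"text": text, "feedback": feedback})
        if feedback is None:          # critic is satisfied: stop early
            break
        text = revise_fn(text, feedback)  # apply the critique
    return text, history

# Toy demo: the "critic" flags short texts, the "reviser" adds detail.
final, trail = improve(
    "Climate change is bad.",
    critique_fn=lambda t: "too vague" if len(t) < 40 else None,
    revise_fn=lambda t, fb: t + " It raises sea levels and disrupts ecosystems.",
)
```

The returned `trail` is the audit log: each entry pairs a draft with the critique it received, which is the observability idea in miniature.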
```shell
# Install from PyPI
pip install sifaka

# Or with uv
uv pip install sifaka
```
Sifaka requires an LLM API key. Set one of these environment variables:
```shell
export OPENAI_API_KEY="your-api-key"     # For OpenAI (GPT-4, etc.)
# or
export ANTHROPIC_API_KEY="your-api-key"  # For Claude
# or
export GEMINI_API_KEY="your-api-key"     # For Google Gemini
# or
export GROQ_API_KEY="your-api-key"       # For Groq
# or (for local Ollama - no API key needed)
export OLLAMA_BASE_URL="http://localhost:11434/v1"  # Optional, defaults to localhost
```
Or create a `.env` file in your project:

```shell
OPENAI_API_KEY=your-api-key
```
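Loading a `.env` file is typically done with the `python-dotenv` package; if you want to see what that amounts to, here is a minimal stdlib-only loader (an illustration, not part of Sifaka — the `load_env` helper is hypothetical):

```python
import os

def load_env(path=".env"):
    """Minimal .env loader: KEY=value lines; '#' comments and blanks ignored."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault: real environment variables take precedence
            os.environ.setdefault(key.strip(), value.strip())

if os.path.exists(".env"):
    load_env()
```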
Using Ollama (Local LLMs):

```python
from sifaka import improve_sync, Config
from sifaka.core.config import LLMConfig

# Use Ollama with a specific model (must set critic_model too!)
config = Config(
    llm=LLMConfig(
        provider="ollama",
        model="mistral:latest",
        critic_model="mistral:latest",  # Important: set this to use Ollama for critiques
    )
)

result = improve_sync("Climate change is bad.", config=config)
```
```python
from sifaka import improve_sync

# Simple one-liner
result = improve_sync("Climate change is bad.")
print(result.final_text)
```
Sifaka implements these peer-reviewed techniques:
```shell
git clone https://github.com/sifaka-ai/sifaka
cd sifaka
pip install -e ".[dev]"
pytest
```
See CONTRIBUTING.md for development guidelines.
MIT License - see LICENSE file for details.