The Agent class is the central orchestrator in Egregore, tying together context management, provider communication, hooks, scaffolds, and workflows. Think of it as the “brain” that coordinates all the framework’s systems.
```python
from egregore import Agent

# Create agent with OpenAI
agent = Agent(provider="openai:gpt-4")

# Make a call
response = agent.call("Hello! What can you help me with?")
print(response)
```
```python
agent = Agent(
    provider="openai:gpt-4",
    system_prompt="You are a helpful coding assistant specialized in Python.",
)

response = agent.call("How do I read a CSV file?")
```
```
1.  User calls agent.call("message")
        ↓
2.  Message added to context at (d0, 0, 0)
        ↓
3.  Context hooks fire (before_change)
        ↓
4.  MessageScheduler advances episode
        ↓
5.  TTL processing + ODI shifting
        ↓
6.  Scaffolds render (if reactive)
        ↓
7.  Context formatted for provider
        ↓
8.  Provider calls AI model
        ↓
9.  Response received
        ↓
10. Response added to context
        ↓
11. Context hooks fire (after_change)
        ↓
12. Scaffolds re-render (if needed)
        ↓
13. Response returned to user
```
Agents maintain conversation history automatically:
```python
agent = Agent(provider="openai:gpt-4")

# Turn 1
agent.call("My name is Alice")        # Context at (d0, 0, 0)

# Turn 2
agent.call("What's my name?")         # Previous at (d1, 0, 0), new at (d0, 0, 0)
# Agent remembers: "Alice"

# Turn 3
agent.call("What did I just ask?")    # Agent has full history
# Previous messages at d1, d2; new at d0
```
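The depth shifting in the comments above can be modeled in plain Python. This is an illustrative sketch of the coordinate scheme, not Egregore's actual internals: each new turn lands at depth 0 and pushes every earlier message one level deeper.

```python
# Illustrative model of the depth shift — NOT Egregore's real implementation.
# Each new turn is inserted at d0; earlier messages move d0 -> d1 -> d2, ...
def add_turn(history, message):
    shifted = [(depth + 1, msg) for depth, msg in history]
    return [(0, message)] + shifted

history = []
for turn in ["My name is Alice", "What's my name?", "What did I just ask?"]:
    history = add_turn(history, turn)

print(history)
# Newest message sits at depth 0; the first turn has been pushed to depth 2.
```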
```python
agent = Agent(provider="openai:gpt-4")

for chunk in agent.stream("Write a short story"):
    print(chunk, end="", flush=True)
```
Async streaming:
```python
import asyncio

async def stream_response():
    agent = Agent(provider="openai:gpt-4")
    async for chunk in agent.astream("Tell me about quantum computing"):
        print(chunk, end="", flush=True)

asyncio.run(stream_response())
```
```python
# Start with OpenAI
agent = Agent(provider="openai:gpt-4")
agent.call("Hello")

# Switch to Anthropic
agent.config.provider = "anthropic:claude-3-5-sonnet-20241022"
agent.call("Continue the conversation")
# Context preserved across providers!
```
```python
agent = Agent(
    provider="openai:gpt-4",
    system_prompt="You are a helpful assistant.",
)

# Update system prompt
agent.config.system_prompt = "You are a coding expert specializing in Python."
agent.call("How do I handle exceptions?")
```
```python
agent = Agent(provider="openai:gpt-4", tools=[get_weather, search_docs])

response = agent.call("What's the weather in Tokyo and find docs about PACT?")
# Agent orchestrates:
# 1. Calls get_weather("Tokyo")
# 2. Calls search_docs("PACT")
# 3. Synthesizes results into final response
```
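The orchestration the comments describe can be sketched as a simple dispatch loop. This is a toy model, not Egregore's implementation: in practice the Agent derives the planned calls from the model's output, whereas here they are hard-coded for illustration.

```python
# Toy sketch of the tool-orchestration loop — not the real Agent internals.
def get_weather(city):
    return f"22°C and clear in {city}"

def search_docs(query):
    return f"Found 3 documents matching '{query}'"

tools = {"get_weather": get_weather, "search_docs": search_docs}

# In Egregore these calls come from the model's response; hard-coded here.
planned_calls = [("get_weather", "Tokyo"), ("search_docs", "PACT")]

results = [tools[name](arg) for name, arg in planned_calls]
final_response = " | ".join(results)  # stand-in for the synthesis step
print(final_response)
```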
```python
# Good: Specific and clear
agent = Agent(
    provider="openai:gpt-4",
    system_prompt=(
        "You are a Python expert. Provide code examples with explanations. "
        "Be concise but thorough."
    ),
)

# Bad: Vague
agent = Agent(
    provider="openai:gpt-4",
    system_prompt="Be helpful.",
)
```
Leverage scaffolds for persistence
Use scaffolds instead of manually managing state:
```python
# Good: Scaffold handles persistence
agent = Agent(provider="openai:gpt-4", enable_scaffolds=True)
agent.call("Remember my API key is xyz123")
# InternalNotesScaffold automatically stores it

# Bad: Manual context management
agent.context.pact_insert("d0, 1, 0", TextContent("API key: xyz123"))
```
Use streaming for long responses
Provide better UX with streaming:
```python
# Good: Immediate feedback
for chunk in agent.stream("Write a long essay"):
    print(chunk, end="", flush=True)

# Bad: Wait for entire response
response = agent.call("Write a long essay")
print(response)  # User waits...
```
Monitor usage for cost control
Track token consumption:
```python
agent = Agent(provider="openai:gpt-4")

# Check usage after calls
agent.call("Some message")
if agent.usage.total_tokens > 100000:
    print("Warning: High token usage!")
```
Use hooks for observability
Add logging and monitoring without modifying core logic:
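The hook-based observability pattern can be illustrated with a toy event registry. This is purely a sketch of the pattern — Egregore's actual hook registration API may look different; only the `before_change` and `after_change` event names come from the lifecycle described earlier.

```python
# Toy hook registry illustrating the pattern — not Egregore's actual API.
# Handlers subscribe to lifecycle events and observe context changes
# without modifying core agent logic.
class HookRegistry:
    def __init__(self):
        self._handlers = {}

    def register(self, event, handler):
        self._handlers.setdefault(event, []).append(handler)

    def fire(self, event, payload):
        for handler in self._handlers.get(event, []):
            handler(payload)

audit_log = []
hooks = HookRegistry()
hooks.register("after_change", lambda payload: audit_log.append(payload))

# The framework would fire this for you after each context change.
hooks.fire("after_change", {"event": "message_added", "tokens": 42})
print(audit_log)
```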
```python
agent = Agent(provider="openai:gpt-4")

# Build context over multiple turns
agent.call("I'm planning a trip to Japan")
agent.call("What's the best time to visit?")
agent.call("What about cherry blossoms?")
agent.call("Book a flight for March")  # Agent has full conversation history
```
```python
# Add persistent user preferences
agent = Agent(provider="openai:gpt-4")

preferences = TextContent(
    content="User prefers: concise answers, code examples, Python 3.11+",
    key="user_preferences",
)
agent.context.pact_insert("d0, 1, 0", preferences)

# All responses now consider preferences
agent.call("How do I read a file?")
# Response will be concise with Python 3.11+ examples
```
```python
agent = Agent(provider="openai:gpt-4")

# Capture state before risky operation
snapshot = agent.context.seal(trigger="before_delete")

# Perform operation
agent.call("Delete all my notes")

# Check if something went wrong
if problem_occurred:
    # Restore from snapshot
    historical = agent.history.at_snapshot(snapshot)
    agent.context = Context.from_snapshot(historical.model_dump())
```