Production Systems#

What You’ll Learn

Enterprise-Grade Production Architecture:

  • LangGraph-native approval workflows with configurable security policies

  • Multi-source data integration through provider framework patterns

  • Container-isolated Python execution with security analysis and EPICS integration

  • Persistent memory storage with cross-session context preservation

  • Complete container management and service orchestration for scalable deployment

Prerequisites: Solid understanding of Infrastructure Components and production deployment concepts

Target Audience: DevOps engineers, system administrators, and architects deploying agentic systems in production environments

The Osprey Framework offers enterprise-grade infrastructure components designed for secure and scalable deployment of agentic systems. These production-ready systems provide the human oversight, data integration, secure execution, and orchestration capabilities essential for high-stakes environments. By implementing a Security-First, Approval-Centric Architecture, the framework delivers robust capabilities while maintaining the flexibility needed for diverse deployment scenarios.

Core Production Components#

🛡️ Human Approval Workflows

LangGraph-native interrupts with configurable policies, rich context, and fail-secure defaults for production environments.

Human Approval
🔗 Data Source Integration

Data retrieval from multiple sources with provider framework and intelligent discovery mechanisms.

Data Integration
🐍 Python Execution Service

Pluggable code generation (Basic LLM, Claude Code, Mock), security analysis, and flexible execution environments.

Python Execution
🧠 Memory Storage Service

Persistent User Memory

File-based storage with framework integration and cross-session context preservation.

Memory Storage
🚀 Container & Deployment

Complete container management with template rendering and hierarchical service discovery.

Container Deployment
🎛️ Control System Integration

Pluggable connectors for control systems (EPICS, LabVIEW, Tango, Mock), supporting both development and production deployment.

Control System Integration

Production Integration Patterns#

High-level execution plan approval with planning mode:

# User enables planning mode
user_input = "/planning Analyze beam performance and adjust parameters"

# Agent processes input
state_updates = await agent.ainvoke(
    {"user_query": user_input},
    config={"configurable": {"thread_id": "session_123"}}
)

# Orchestrator automatically:
# 1. Generates execution plan using LLM
# 2. Validates capabilities exist
# 3. Creates approval interrupt (planning mode enabled)
# 4. Waits for user approval

# After approval, orchestrator executes planned steps
  • Automatic plan generation using orchestrator LLM

  • LangGraph-native interrupts for plan approval

  • Resumable workflow after user approval/rejection

  • File-based plan storage for review and editing
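The fail-secure default mentioned above means an unanswered or rejected interrupt must never let a plan execute. A minimal sketch of that decision rule in plain Python (`ApprovalDecision` and `resolve_approval` are illustrative names, not Osprey APIs):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ApprovalDecision:
    """Outcome of a human review of a generated execution plan."""
    approved: bool
    reviewer: Optional[str] = None
    comment: str = ""

def resolve_approval(decision: Optional[ApprovalDecision]) -> bool:
    """Fail-secure resolution: anything other than an explicit,
    affirmative decision blocks execution."""
    if decision is None:          # interrupt was never answered
        return False
    return decision.approved      # explicit approve or reject

# A missing or rejected decision never lets the plan run
assert resolve_approval(None) is False
assert resolve_approval(ApprovalDecision(approved=False)) is False
assert resolve_approval(ApprovalDecision(approved=True, reviewer="ops")) is True
```

The key design point is that the safe outcome (block) is the default path, so a dropped session or crashed reviewer connection cannot silently authorize execution.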

Service-based Python execution with automatic approval handling:

from osprey.registry import get_registry
from osprey.services.python_executor import PythonExecutionRequest
from osprey.approval import handle_service_with_interrupts

# Get service from registry
registry = get_registry()
python_service = registry.get_service("python_executor")

# Create execution request
request = PythonExecutionRequest(
    user_query="Read EPICS beam current and create plot",
    task_objective="Analyze beam data",
    capability_prompts=["Use pyepics for PV access"],
    execution_folder_name="beam_analysis"
)

# Service handles generation, analysis, approval, execution
result = await handle_service_with_interrupts(
    service=python_service,
    request=request,
    config=service_config,
    logger=logger,
    capability_name="BeamAnalysis"
)

# Access results
data = result.execution_result.results
  • Pluggable code generators (Basic LLM, Claude Code, Mock)

  • Automatic pattern detection for EPICS reads/writes

  • Configurable approval modes (disabled, epics_writes, all_code)

  • Container or local execution with seamless switching
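The `epics_writes` approval mode above implies some form of write-pattern detection on generated code. A simplified sketch of how such a check might classify code (the regex patterns and the `requires_approval` helper are illustrative; Osprey's actual security analyzer is not shown here):

```python
import re

# Illustrative write patterns only; a real analyzer would be more thorough.
EPICS_WRITE_PATTERNS = [
    r"\bcaput\s*\(",   # pyepics caput('PV:NAME', value)
    r"\.put\s*\(",     # pv.put(value) on a PV object
]

def requires_approval(code: str, mode: str) -> bool:
    """Decide whether generated code needs human approval.

    mode is one of: 'disabled', 'epics_writes', 'all_code'.
    """
    if mode == "disabled":
        return False
    if mode == "all_code":
        return True
    # 'epics_writes': gate only code that writes to process variables
    return any(re.search(p, code) for p in EPICS_WRITE_PATTERNS)

assert requires_approval("epics.caget('SR:CURRENT')", "epics_writes") is False
assert requires_approval("caput('SR:CORR1', 0.5)", "epics_writes") is True
assert requires_approval("print('hi')", "all_code") is True
```

Reads pass through unreviewed under `epics_writes`, while anything that could change machine state is routed to the approval workflow.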

Unified data access through the provider framework:

# Parallel data retrieval pattern
retrieval_result = await data_manager.retrieve_all_context(
    DataSourceRequest(query=task.description)
)

# Available to all capabilities automatically
user_memory = retrieval_result.context_data.get("core_user_memory")
domain_data = retrieval_result.context_data.get("custom_provider")
  • Automatic provider discovery through registry system

  • Parallel retrieval with timeout management

  • Type-safe integration with capability context
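Parallel retrieval with timeout management can be sketched with plain `asyncio`: every provider runs concurrently under its own deadline, and a slow or failing source is dropped rather than blocking the rest (the provider names and the `retrieve_all` helper are illustrative, not Osprey's API):

```python
import asyncio
from typing import Any, Awaitable, Callable, Dict

async def retrieve_all(
    providers: Dict[str, Callable[[], Awaitable[Any]]],
    timeout: float = 2.0,
) -> Dict[str, Any]:
    """Run all registered providers concurrently; a provider that
    times out or raises is simply absent from the result."""
    async def guarded(name, fetch):
        try:
            return name, await asyncio.wait_for(fetch(), timeout)
        except Exception:          # includes asyncio.TimeoutError
            return name, None
    results = await asyncio.gather(*(guarded(n, f) for n, f in providers.items()))
    return {name: data for name, data in results if data is not None}

async def fast():
    return {"memories": ["prefers plots in SI units"]}

async def slow():
    await asyncio.sleep(10)        # exceeds the timeout budget
    return {"unreachable": True}

context = asyncio.run(
    retrieve_all({"core_user_memory": fast, "custom_provider": slow}, timeout=0.1)
)
assert "core_user_memory" in context
assert "custom_provider" not in context
```

This mirrors the property the bullet list describes: one unresponsive data source degrades the context rather than stalling capability execution.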

Coordinated deployment and management of production services:

# Container management using the function-based system
from osprey.deployment.container_manager import find_service_config, setup_build_dir

# Deploy services by configuring them in deployed_services list
deployed_services = [
    "osprey.pipelines",
    "osprey.jupyter"
]

# Services are deployed through container_manager.py script
# python container_manager.py config.yml up -d

# Service management through compose files
for service_name in deployed_services:
    service_config, template_path = find_service_config(config, service_name)
    if service_config and template_path:
        compose_file = setup_build_dir(template_path, config, service_config)
        # compose_file is then handed to Podman Compose for orchestration
  • Hierarchical service discovery through osprey.* and applications.* naming

  • Template-based configuration for environment-specific deployments

  • Podman Compose orchestration with multi-file support
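Hierarchical discovery through dotted `osprey.*` and `applications.*` names amounts to walking a nested configuration tree. A toy sketch under an assumed config layout (the schema and `find_service` helper are illustrative, not Osprey's actual ones):

```python
from typing import Optional

# Example config mirroring the osprey.* / applications.* hierarchy;
# this structure is an assumption for illustration only.
CONFIG = {
    "osprey": {
        "pipelines": {"image": "osprey/pipelines:1.0"},
        "jupyter": {"image": "osprey/jupyter:1.0"},
    },
    "applications": {
        "beamline": {"image": "site/beamline:2.3"},
    },
}

def find_service(config: dict, name: str) -> Optional[dict]:
    """Resolve a dotted service name ('osprey.pipelines',
    'applications.beamline') by walking the config hierarchy."""
    node = config
    for part in name.split("."):
        if not isinstance(node, dict) or part not in node:
            return None               # unknown segment: no such service
        node = node[part]
    return node

assert find_service(CONFIG, "osprey.pipelines") == {"image": "osprey/pipelines:1.0"}
assert find_service(CONFIG, "applications.missing") is None
```

The dotted prefix cleanly separates framework-provided services from site-specific applications while using one resolution path for both.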

Persistent user context with intelligent retrieval:

# Memory-enhanced capability execution
@capability_node
class DataAnalysisCapability(BaseCapability):
    @staticmethod
    async def execute(state: AgentState, **kwargs):
        # Retrieve user memory through data source integration
        data_manager = get_data_source_manager()
        requester = DataSourceRequester("capability", "data_analysis")
        request = create_data_source_request(state, requester)
        retrieval_result = await data_manager.retrieve_all_context(request)

        # Access memory context from data sources
        user_memory_context = retrieval_result.context_data.get("core_user_memory")
        if user_memory_context:
            user_memories = user_memory_context.data  # UserMemories object
            # Use memory data to enhance analysis
  • Data source integration for automatic memory context injection

  • Persistent memory storage through UserMemoryProvider

  • Framework-native memory operations through MemoryOperationsCapability
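The cross-session behavior above rests on file-based persistence: memories written in one session are reloaded from disk in the next. A minimal sketch (the `FileMemoryStore` class is illustrative; Osprey's `UserMemoryProvider` is the real implementation):

```python
import json
import tempfile
from pathlib import Path

class FileMemoryStore:
    """Minimal file-backed user memory: one JSON file per user,
    reloaded on each session so context survives restarts."""

    def __init__(self, root: Path):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def _path(self, user_id: str) -> Path:
        return self.root / f"{user_id}.json"

    def append(self, user_id: str, memory: str) -> None:
        memories = self.load(user_id)
        memories.append(memory)
        self._path(user_id).write_text(json.dumps(memories))

    def load(self, user_id: str) -> list:
        p = self._path(user_id)
        return json.loads(p.read_text()) if p.exists() else []

root = Path(tempfile.mkdtemp())
store = FileMemoryStore(root)
store.append("alice", "prefers beam current in mA")

# A fresh store instance (a new session) still sees the memory
assert FileMemoryStore(root).load("alice") == ["prefers beam current in mA"]
```

Because the store is reconstructed purely from files, any capability in any later session can inject the same user context through the data source integration layer.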