Container Deployment

What you'll learn: How to deploy and manage containerized services using the Osprey Framework's deployment CLI

What You'll Learn

Key Concepts:

- Using the osprey deploy CLI for service deployment and orchestration
- Configuring services in your project's config.yml
- Managing Jinja2 template rendering with docker-compose.yml.j2 files
- Understanding build directory management and source code copying
- Implementing development vs. production deployment patterns

Prerequisites: Understanding of Docker/container concepts and the Configuration System

Time Investment: 30-45 minutes for complete understanding
Overview
The Osprey Framework provides a container management system for deploying services. The system handles service discovery, Docker Compose template rendering, and container orchestration using Docker or Podman with native compose support.
Core Features:

- Runtime Flexibility: Automatic detection and use of Docker or Podman (see the check below)
- Simple Service Configuration: All services defined in a flat services: section
- Template Rendering: Jinja2 processing of Docker Compose templates with full configuration context
- Build Management: Automated build directory creation with source code and configuration copying
- Container Orchestration: Docker Compose or Podman Compose integration for multi-service deployment
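The runtime detection itself happens inside the framework; if you want to see manually which runtime is available on your host, a quick check with standard shell commands (this is not the framework's actual detection logic):

# check which container runtimes are installed on the host
command -v podman >/dev/null && podman --version
command -v docker >/dev/null && docker --version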
Architecture

The container management system uses a simple, flat directory structure. All services live in your project's services/ directory and are configured the same way.
Common Services:
- Framework Infrastructure Services:
Core services used across applications:
  - jupyter: Python execution environment with EPICS support
  - open-webui: Web interface for agent interaction
  - pipelines: Processing pipeline infrastructure
- Application-Specific Services:
Custom services for your particular application. Examples from the ALS Accelerator Assistant:
  - mongo: MongoDB database for ALS operations data
  - pv_finder: EPICS Process Variable discovery MCP server
  - langfuse: LLM observability and monitoring
  - Any custom services you create
All services are defined in the same services: section of your config.yml, regardless of whether they're framework infrastructure or application-specific.
Service Configuration

Services are configured in your project's config.yml using a simple, flat structure. All services, whether framework infrastructure or application-specific, use the same configuration format.

Basic Configuration Pattern

Here's the standard pattern used by all framework projects:
# config.yml - Your project configuration
# Define all services in a flat structure
services:
  # Jupyter - Python execution environment
  jupyter:
    path: ./services/jupyter
    containers:
      read:
        name: jupyter-read
        hostname: jupyter-read
        port_host: 8088
        port_container: 8088
        execution_modes: ["read_only"]
      write:
        name: jupyter-write
        hostname: jupyter-write
        port_host: 8089
        port_container: 8088
        execution_modes: ["write_access"]
    copy_src: true
    render_kernel_templates: true

  # Open WebUI - User interface frontend
  open_webui:
    path: ./services/open-webui
    hostname: localhost
    port_host: 8080
    port_container: 8080

  # Pipelines - Processing infrastructure
  pipelines:
    path: ./services/pipelines
    port_host: 9099
    port_container: 9099
    copy_src: true

  # Application-specific service example (optional)
  # Example: MongoDB for your application data
  mongo:
    name: mongo
    path: ./services/mongo
    port_host: 27017
    port_container: 27017
    copy_src: false

# Control which services to deploy
deployed_services:
  - jupyter
  - open_webui
  - pipelines
  # - mongo  # Add your application services as needed
Key Configuration Options:

- path: Directory containing the service's Docker Compose template (docker-compose.yml.j2)
- name: Container name (defaults to the service key if not specified)
- hostname: Container hostname for networking
- port_host / port_container: Port mapping between host and container
- copy_src: Whether to copy the src/ directory into the build directory (default: false)
- additional_dirs: Extra directories to copy to the build environment (list)
- render_kernel_templates: Process Jupyter kernel templates (for Jupyter services only)
- containers: Multi-container configuration (for services like Jupyter with read/write variants)
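To illustrate the optional fields together, a minimal hypothetical service entry (the service name and paths are placeholders, not framework defaults):

services:
  my_service:
    name: my-service            # defaults to the service key ("my_service") if omitted
    path: ./services/my-service
    hostname: my-service
    port_host: 9000
    port_container: 9000
    copy_src: true              # copy src/ into the build directory
    additional_dirs:
      - docs                    # extra directory copied into the build environment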
Service Directory Organization
Your project organizes services in a flat directory structure:
your-project/
├── services/
│   ├── docker-compose.yml.j2        # Root network configuration
│   ├── jupyter/                     # Jupyter service
│   │   ├── docker-compose.yml.j2
│   │   ├── Dockerfile
│   │   ├── custom_start.sh
│   │   └── python3-epics-readonly/
│   ├── open-webui/                  # Web UI service
│   │   ├── docker-compose.yml.j2
│   │   ├── Dockerfile
│   │   └── functions/
│   ├── pipelines/                   # Processing pipeline service
│   │   ├── docker-compose.yml.j2
│   │   └── main.py
│   └── mongo/                       # (Optional) Application services
│       ├── docker-compose.yml.j2    # E.g., MongoDB for ALS Assistant
│       └── Dockerfile
├── config.yml
└── src/
    └── your_app/
- Each service directory contains:
  - docker-compose.yml.j2 (required): Jinja2 template for Docker Compose
  - Dockerfile (optional): If the service needs a custom image
  - Other service-specific files (scripts, configs, etc.)

The path field in your configuration points to these service directories.
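Adding a new service is then just a directory plus a template; a quick sketch with an illustrative name:

# scaffold a new service directory
mkdir -p services/my-service
touch services/my-service/docker-compose.yml.j2  # required template
# then point a services: entry's path at ./services/my-service in config.yml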
Deployment Workflow
The container management system supports both development and production deployment patterns.
New in v0.7+: Framework CLI Commands
Service deployment is now managed through the osprey deploy CLI command.
Development Pattern

For development and debugging, start services incrementally:

1. Configure services incrementally in config.yml:

   deployed_services:
     - open_webui  # Start with one service

2. Start in non-detached mode to monitor logs:

   osprey deploy up

3. Add additional services after verifying each one works correctly
Production Pattern

For production deployment:

1. Configure all required services in config.yml:

   deployed_services:
     - jupyter
     - open_webui
     - pipelines

2. Start all services in detached mode:

   osprey deploy up --detached

3. Verify services are running:

   podman ps
Development Mode
Development mode enables testing Osprey framework changes in containers without publishing to PyPI. When enabled with the --dev flag, containers use your locally installed Osprey instead of the PyPI version.
When to Use:

- Testing framework modifications before release
- Debugging framework internals within container environments
- Contributing to framework development
- Validating framework changes across containerized services
How It Works:

The deployment system automatically:

1. Locates your locally installed framework package
2. Copies the framework source code to the build directory
3. Sets the DEV_MODE environment variable for containers
4. Containers then install the local framework copy instead of the PyPI version
Usage:
# Deploy with local framework (foreground)
osprey deploy up --dev
# Deploy with local framework (background)
osprey deploy up --detached --dev
Verification:
After deploying in development mode, verify the framework source was copied:
# Check for osprey override directory
ls build/services/jupyter/osprey_override/
# Check environment variable in container
podman exec jupyter-read env | grep DEV_MODE
Fallback Behavior:

If the local framework cannot be located or copied:

- The system prints a warning message
- Containers fall back to installing from PyPI
- Deployment continues normally
This ensures deployments succeed even if development mode setup fails.
Docker Compose Templates

Services use Jinja2 templates for Docker Compose file generation. Templates have access to your complete configuration context.

Template Structure

Templates are located at {service_path}/docker-compose.yml.j2. Here's a complete example:
# services/jupyter/docker-compose.yml.j2
services:
  jupyter-read:
    container_name: {{services.jupyter.containers.read.name}}
    build:
      context: ./jupyter
      dockerfile: Dockerfile
    restart: unless-stopped
    ports:
      - "{{services.jupyter.containers.read.port_host}}:{{services.jupyter.containers.read.port_container}}"
    volumes:
      - ./jupyter:/jupyter
      - {{project_root}}/{{file_paths.agent_data_dir}}/{{file_paths.executed_python_scripts_dir}}:/home/jovyan/work/executed_scripts/
    environment:
      - NOTEBOOK_DIR=/home/jovyan/work
      - JUPYTER_ENABLE_LAB=yes
      - PYTHONPATH=/jupyter/repo_src
      - TZ={{system.timezone}}
      - HTTP_PROXY=${HTTP_PROXY}
      - NO_PROXY=${NO_PROXY}
    networks:
      - osprey-network
Template Features:

- Configuration Access: Full configuration available as Jinja2 variables
  - Access services: {{services.service_name.option}}
  - Access file paths: {{file_paths.agent_data_dir}}
  - Access system config: {{system.timezone}}
  - Access project root: {{project_root}}
- Environment Variables: Reference the host environment via ${VAR_NAME}
- Networking: All services automatically join the osprey-network
- Volume Management: Dynamic volume mounting based on configuration
Template Access Patterns
Common template patterns for accessing configuration:
# Access service configuration
ports:
  - "{{services.my_service.port_host}}:{{services.my_service.port_container}}"

# Access nested service config (like Jupyter containers)
container_name: {{services.jupyter.containers.read.name}}

# Access file paths
volumes:
  - {{project_root}}/{{file_paths.agent_data_dir}}:/data

# Access system configuration
environment:
  - TZ={{system.timezone}}

# Access custom configuration
environment:
  - DATABASE_URL={{database.connection_string}}
Deployment CLI Usage
Deploy services using the osprey deploy command.
Basic Commands
# Start services in foreground (see logs in terminal)
osprey deploy up
# Start services in background (detached mode)
osprey deploy up --detached
# Stop services
osprey deploy down
# Restart services
osprey deploy restart
# Show service status
osprey deploy status
# Clean deployment (remove containers and volumes)
osprey deploy clean
# Rebuild containers from scratch
osprey deploy rebuild
Service Status Display
The status command displays detailed information about all deployed services in a formatted table with visual indicators:
osprey deploy status
Example Output:
Service Deployment Status
┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┓
┃ Service       ┃ Project     ┃ Status    ┃ Ports       ┃ Image        ┃
┡━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━━━┩
│ open-webui    │ weather-ag… │ ● Running │ 8080→8080   │ ghcr.io/...  │
│ pipelines     │ weather-ag… │ ● Running │ 9099→9099   │ local/...    │
│ jupyter-read  │ weather-ag… │ ● Running │ 8088→8088   │ local/...    │
│ jupyter-write │ weather-ag… │ ● Running │ 8089→8088   │ local/...    │
│ mongo         │ als-assist… │ ○ Stopped │ 27017→27017 │ mongo:latest │
└───────────────┴─────────────┴───────────┴─────────────┴──────────────┘
New in v0.8.2: Project Tracking
The status display now includes project ownership tracking using Docker labels. This enables multi-project deployments where you can identify which project/agent owns each container.
The status display includes:

- Service: Container name
- Project: Project/agent name (from project_name in config.yml)
- Status: Running (●) or Stopped (○) with a visual indicator
- Ports: Port mappings (host→container)
- Image: Container image used
- Health: Health check status (if configured)
Multi-Project Status:
You can check status for specific projects using the --project flag:
# Check status of specific project
osprey deploy status --project ~/projects/weather-agent
# Check multiple projects
osprey deploy status --project ~/projects/agent1
osprey deploy status --project ~/projects/agent2
Container Labels and Filtering
All deployed containers are automatically labeled with project metadata using Docker labels. This enables advanced container management and filtering.
Container Labels:

Each container gets three automatic labels:

- osprey.project.name - Project identifier (from project_name in config.yml)
- osprey.project.root - Absolute path to the project directory
- osprey.deployed.at - ISO 8601 timestamp of deployment
Query Containers by Project:
# List all Osprey containers
podman ps --filter label=osprey.project.name
# List containers for specific project
podman ps --filter label=osprey.project.name=weather-agent
# Inspect container labels
podman inspect jupyter-read | grep osprey
Use Cases:

- Multi-project deployments: Run multiple agent projects simultaneously
- Container identification: Quickly identify which project owns containers
- Automation: Script container management based on project labels (see the sketch below)
- Debugging: Filter logs and status by project
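As a sketch of label-based automation (standard podman commands using the label names above; the project name comes from the earlier examples):

# stop every container belonging to one project
podman ps -q --filter label=osprey.project.name=weather-agent | xargs -r podman stop

# read a single label, e.g. the deployment timestamp
podman inspect jupyter-read --format '{{ index .Config.Labels "osprey.deployed.at" }}'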
Command Options
# Use custom configuration file
osprey deploy up --config my-config.yml
# Deploy with local framework (development mode)
osprey deploy up --dev
# Expose services to all network interfaces (use with caution!)
osprey deploy up --expose
# Restart in detached mode
osprey deploy restart --detached
# Rebuild and start in detached mode with local framework
osprey deploy rebuild --detached --dev
Deployment Workflow Details
When you run osprey deploy up, the container manager follows this workflow:
1. Configuration Loading: Load and merge configuration files
2. Service Discovery: Read the deployed_services list to identify active services
3. Build Directory Creation: Create clean build directories for each service
4. Template Processing: Render Jinja2 templates with complete configuration context
5. File Copying: Copy service files, source code, and additional directories
6. Configuration Flattening: Generate self-contained config files for containers
7. Container Orchestration: Execute Docker/Podman Compose with the generated files
Generated Build Directory:
build/services/
├── docker-compose.yml               # Root network configuration
├── jupyter/
│   ├── docker-compose.yml           # Rendered Jupyter service config
│   ├── Dockerfile
│   ├── custom_start.sh
│   ├── python3-epics-readonly/
│   │   └── kernel.json              # Rendered kernel config
│   ├── python3-epics-write/
│   │   └── kernel.json
│   ├── repo_src/                    # Copied source code (if copy_src: true)
│   │   ├── your_app/
│   │   └── requirements.txt
│   └── config.yml                   # Flattened configuration
├── open-webui/
│   ├── docker-compose.yml
│   └── ...
└── pipelines/
    ├── docker-compose.yml
    ├── repo_src/                    # Copied source code
    └── config.yml
Container Networking

Services communicate through container networks using service names as hostnames.

Service Communication

Container-to-container communication uses service names:
- OpenWebUI to Pipelines: http://pipelines:9099
- Pipelines to Jupyter: http://jupyter-read:8088
- Application to MongoDB (ALS Assistant): mongodb://mongo:27017
- Application to PV Finder (ALS Assistant): http://pv-finder:8051
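In a compose template this usually appears as an environment variable pointing at another service's container-network hostname; for example (the variable name is illustrative, not a setting the services require):

# in services/open-webui/docker-compose.yml.j2
environment:
  - PIPELINES_BASE_URL=http://pipelines:9099  # hostname = container/service name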
Host Access from Containers

For containers to access services running on the host (like Ollama):

- Use host.docker.internal instead of localhost (Docker on macOS/Windows)
- Use host.containers.internal on Linux with Podman
- Example: http://host.docker.internal:11434 for Ollama
# In docker-compose.yml.j2
environment:
  - OLLAMA_BASE_URL=http://host.docker.internal:11434  # Use host.containers.internal for Podman
Network Binding and Security
Changed in version 0.10.7: Services now bind to 127.0.0.1 (localhost only) by default. Previous versions bound to 0.0.0.0, exposing services to all network interfaces.
By default, all deployed services bind to localhost only (127.0.0.1). This means services are only accessible from the machine running the containers, not from external hosts on the network.
Danger
Versions v0.10.6 and earlier bound services to 0.0.0.0 by default, exposing them to all network interfaces. If you were running an earlier version:

1. Stop all Osprey-related containers immediately
2. Update your Osprey installation
3. Redeploy from scratch so the new secure defaults take effect
4. Rotate any API keys (OpenAI, Anthropic, CBorg, etc.) that were configured as environment variables
5. Consider running a security scan on machines that were deployed on public networks
Exposing Services to the Network
If you intentionally need services accessible from external hosts (e.g., cloud deployments, shared lab servers), use the --expose flag:
# Expose services to all network interfaces (0.0.0.0)
osprey deploy up --expose
# Combine with other flags
osprey deploy up --detached --expose
osprey deploy rebuild --expose
Warning
The --expose flag binds services to 0.0.0.0, making them accessible from any machine on the network. Only use this when:

- You have proper authentication configured on all exposed services
- Your deployment is behind a firewall or in an isolated network
- You understand the security implications of exposing open ports

Do not use this flag in development unless you specifically need network access from other machines.
You can also set the bind address persistently in your config.yml:
# config.yml
deployment:
  bind_address: "127.0.0.1"  # Default: localhost only
  # bind_address: "0.0.0.0"  # Caution: exposes to all interfaces
The --expose CLI flag overrides the config.yml setting for that invocation.
Docker Compose Deployments Only

The --expose flag only applies to osprey deploy up (Docker/Podman Compose deployments). If you manage containers through other orchestration systems (e.g., Kubernetes, Rancher), network binding is controlled by your orchestrator's configuration instead.
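To confirm the effective binding on the host after a deployment, standard socket tools work (8080 is just the example port):

# expect 127.0.0.1:8080 with the default, 0.0.0.0:8080 after --expose
ss -tlnp | grep 8080          # Linux
lsof -iTCP:8080 -sTCP:LISTEN  # macOS alternative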
Port Mapping
Services expose ports to the host system for external access:
# Host access through mapped ports
services:
  open_webui:
    ports:
      - "8080:8080"  # Access at http://localhost:8080
Common port mappings:

- OpenWebUI: 8080:8080 → http://localhost:8080
- Jupyter Read: 8088:8088 → http://localhost:8088
- Jupyter Write: 8089:8088 → http://localhost:8089
- Pipelines: 9099:9099 → http://localhost:9099
Advanced Configuration

Environment Variables
The container manager automatically loads environment variables from .env:
# .env file - Services will have access to these variables
OPENAI_API_KEY=your_key_here
ANTHROPIC_API_KEY=your_key_here
CBORG_API_KEY=your_key_here
PROJECT_ROOT=/absolute/path/to/project
LOCAL_PYTHON_VENV=/path/to/venv/bin/python
These variables are:

- Available to Docker Compose via ${VAR_NAME} syntax
- Passed to containers via environment: sections
- Used by the configuration system for variable expansion
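For example, a template can forward a key from .env into a container (using the pipelines service and the OPENAI_API_KEY variable shown above):

# in services/pipelines/docker-compose.yml.j2
services:
  pipelines:
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}  # resolved from .env at deploy time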
Build Directory Customization
Change the build output directory:
# config.yml
build_dir: "./custom_build"
Source Code Integration
Control whether services get a copy of your src/ directory:
services:
  pipelines:
    copy_src: true  # Copies src/ to build/services/pipelines/repo_src/
This is useful for:

- The pipelines server (needs access to your application code)
- Jupyter containers (need your application for interactive development)
- Services that execute your application code

Services that don't need source code (databases, UI-only services) should set copy_src: false.
Additional Directories
Copy extra directories into service build environments:
services:
  jupyter:
    additional_dirs:
      # Simple directory copy
      - docs
      # Custom source -> destination mapping
      - src: "_agent_data"
        dst: "agent_data"
      # Copy framework source (useful during development)
      - src: ../osprey/src/osprey
        dst: osprey_src/src/osprey
This is commonly used for:

- Documentation that services need
- Data directories
- Configuration files
- Osprey source during development (before osprey is pip-installable)
Build Directory Management

The container manager creates complete, self-contained build environments for each service.

Build Process
For each deployed service, the build process:
1. Clean Build Directory: Remove the existing build directory for a clean deployment
2. Render Docker Compose Template: Process the Jinja2 template with configuration
3. Copy Service Files: Copy all files from the service directory (except .j2 templates)
4. Copy Source Code: If copy_src: true, copy the entire src/ directory
5. Copy Additional Directories: Copy any directories specified in additional_dirs
6. Create Flattened Configuration: Generate a self-contained config.yml for the container
7. Process Kernel Templates: If render_kernel_templates: true, render Jupyter kernel configs
Source Code Handling:

When copy_src: true:

- Source code is copied to build/services/{service}/repo_src/
- The global requirements.txt is automatically copied
- The project's pyproject.toml is copied as pyproject_user.toml
- Containers set PYTHONPATH to include the copied source
Configuration Flattening:

Each service gets a config.yml with:

- All imports resolved and merged
- Complete, self-contained configuration
- Registry paths adjusted for the container environment
- No import directives (everything is flattened)
Working Examples

Complete Control Assistant Example

This example shows a complete working configuration from the Control Assistant tutorial:
# config.yml
project_name: "my-control-assistant"
build_dir: ./build
project_root: /home/user/my-control-assistant
registry_path: ./src/my_control_assistant/registry.py

services:
  jupyter:
    path: ./services/jupyter
    containers:
      read:
        name: jupyter-read
        hostname: jupyter-read
        port_host: 8088
        port_container: 8088
        execution_modes: ["read_only"]
      write:
        name: jupyter-write
        hostname: jupyter-write
        port_host: 8089
        port_container: 8088
        execution_modes: ["write_access"]
    copy_src: true
    render_kernel_templates: true

  open_webui:
    path: ./services/open-webui
    hostname: localhost
    port_host: 8080
    port_container: 8080

  pipelines:
    path: ./services/pipelines
    port_host: 9099
    port_container: 9099
    copy_src: true

deployed_services:
  - jupyter
  - open_webui
  - pipelines

system:
  timezone: ${TZ:-America/Los_Angeles}

file_paths:
  agent_data_dir: _agent_data
  executed_python_scripts_dir: executed_scripts
Deploy this configuration:
osprey deploy up --detached
# Services available:
# - OpenWebUI: http://localhost:8080
# - Jupyter Read: http://localhost:8088
# - Jupyter Write: http://localhost:8089
# - Pipelines: http://localhost:9099
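A quick smoke test once the containers are up (plain curl against the mapped ports; exact response codes depend on each service, so treat this as a pattern rather than a guaranteed check):

curl -sf http://localhost:8080 >/dev/null && echo "open-webui OK"
curl -sf http://localhost:9099 >/dev/null && echo "pipelines OK"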
Troubleshooting

Services fail to start:

- Check individual service logs: podman logs <container_name>
- Verify configuration syntax in config.yml
- Ensure required environment variables are set in .env
- Try starting services individually to isolate issues
- Check that service paths exist and contain docker-compose.yml.j2
Port conflicts:

- Check for processes using required ports: lsof -i :8080
- Update port mappings in service configurations
- Ensure no other containers are using the same ports
- Verify deployed_services doesn't contain duplicate services
Container networking issues:

- Verify service names match configuration
- Use container network names (e.g., pipelines), not localhost
- Check firewall settings if accessing from external systems
- Ensure all services use the osprey-network
Template rendering errors:

- Verify Jinja2 syntax in template files ({{variable}}, not {variable})
- Check that configuration values exist before accessing them
- Review template paths in error messages
- Inspect generated files in the build/ directory
Service not found in configuration:

- Verify the service is defined in the services: section
- Check that the service name matches between services: and deployed_services:
- Ensure the deployed_services list uses correct service names
Template file not found:

- Verify docker-compose.yml.j2 exists at the path location
- Check that the service path is correct relative to your project root
- Ensure you haven't accidentally specified a directory that doesn't exist
Copy source failures:

- Verify the src/ directory exists if copy_src: true
- Check permissions on source directories
- Ensure additional_dirs paths exist
Development mode issues:

- Verify osprey is installed in your active virtual environment
- Check that the osprey_override/ directory exists in the build after deployment
- Confirm DEV_MODE=true is set in the container environment
- If the osprey source is not found, containers fall back to the PyPI version
- Review console output for osprey copy warnings during deployment
List running containers:
podman ps
View container logs:
podman logs <container_name>
podman logs -f <container_name> # Follow logs in real-time
Inspect container configuration:
podman inspect <container_name>
Network inspection:
podman network ls
podman network inspect osprey-network
Check generated configuration:
# Review rendered Docker Compose files
cat build/services/jupyter/docker-compose.yml
# Check flattened configuration
cat build/services/pipelines/config.yml
Check for port conflicts:
lsof -i :8080 # Check specific port
netstat -tulpn | grep :8080 # Alternative method
Test network connectivity:
# Test container-to-container communication
podman exec pipelines ping jupyter-read
podman exec pipelines curl http://open-webui:8080
Rebuild after configuration changes:
# Full rebuild (safest after config changes)
osprey deploy clean
osprey deploy up --detached
# Or use rebuild command (clean + up in one step)
osprey deploy rebuild --detached
Verify development mode:
# Check if osprey override was copied
ls -la build/services/jupyter/osprey_override/
# Verify DEV_MODE environment variable in container
podman exec jupyter-read env | grep DEV_MODE
# Check osprey installation in container
podman exec jupyter-read pip show osprey-framework
Common Commands:
# Start services (localhost only by default)
osprey deploy up
osprey deploy up --detached
# Start services exposed to network (use with caution!)
osprey deploy up --expose
# Stop services
osprey deploy down
# Check status
osprey deploy status
podman ps
# View logs
podman logs <container_name>
podman logs -f <container_name>
# Clean restart
osprey deploy clean
osprey deploy up --detached
Common Service Names:

- jupyter-read - Jupyter read-only container
- jupyter-write - Jupyter write-access container
- open-webui - Web interface
- pipelines - Processing pipeline
- mongo - MongoDB (ALS Assistant)
- pv-finder - PV Finder MCP (ALS Assistant)
Common Ports:

- 8080 - OpenWebUI
- 8088 - Jupyter (read-only)
- 8089 - Jupyter (write-access)
- 9099 - Pipelines
- 27017 - MongoDB
- 8051 - PV Finder
Development:

- Start minimal: Begin with one service, verify it works, then add more
- Use foreground mode: Run osprey deploy up (not detached) during development to see logs
- Test services individually: Deploy services one at a time to isolate issues
- Keep the build directory in .gitignore: Build artifacts shouldn't be version controlled
- Use meaningful container names: Makes logs and debugging easier
- Use development mode for framework changes: Run osprey deploy up --dev when testing osprey modifications
- Verify development mode: Check console output for osprey copy messages
Production:

- Use detached mode: Run osprey deploy up --detached for production
- Keep services on localhost by default: Only use --expose when you need network access and have authentication configured
- Monitor container resources: Use podman stats to watch resource usage
- Implement health checks: Add health check configurations to your docker-compose templates (see the sketch after this list)
- Plan restart policies: Use restart: unless-stopped in production templates
- Regular backups: Back up data volumes for database services
- Document deployment: Keep notes on deployment procedures and configurations
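A minimal health check sketch in standard Compose syntax (the /health endpoint is illustrative; use whatever endpoint your service actually exposes, and note the check runs inside the container, so curl must be installed in the image):

services:
  pipelines:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9099/health"]  # illustrative endpoint
      interval: 30s
      timeout: 5s
      retries: 3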
Configuration:

- Keep secrets in .env: Never commit API keys or passwords to version control
- Use absolute paths: Set project_root as an absolute path in config
- Test changes incrementally: Test configuration changes in development first
- Version control configs: Track config.yml and templates in git
- Document custom modifications: Comment any non-standard template changes
- Validate before deploying: Check YAML syntax before running deploy commands
Template Development:

- Test templates incrementally: Verify each configuration value exists
- Use descriptive variable names: Clear naming makes templates maintainable
- Add comments: Document non-obvious template logic
- Check rendered output: Review files in build/ after changes
- Handle missing values gracefully: Use Jinja2 defaults: {{value|default('fallback')}} (see the example after this list)
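For instance, a template can fall back to safe values when optional config keys are absent (the logging.level key here is hypothetical, shown only to demonstrate the pattern):

# in a docker-compose.yml.j2 template
environment:
  - TZ={{ system.timezone | default('UTC') }}
  - LOG_LEVEL={{ logging.level | default('INFO') }}  # logging.level is hypothetical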
See also

- Configuration System
  Advanced configuration patterns and variable expansion
- Container Management
  Container management API reference