Installation & Setup#
What You’ll Learn#
This installation guide covers the complete framework setup process:
Installing Container Runtime - Docker Desktop or Podman for containerized services
Python 3.11 Setup - Virtual environment configuration
Framework Installation - Installing the pip package with all dependencies
Project Creation - Generating a new project from templates
Configuration - Setting up config.yml and environment variables
Service Deployment - Starting containerized services (Jupyter, OpenWebUI, Pipelines)
OpenWebUI Configuration - Chat interface setup and customization
Prerequisites
System Requirements:
Operating System: Linux, macOS, or Windows with WSL2
Admin/sudo access: Required for installing container runtime and Python
Internet connection: For downloading packages and container images
Disk space: At least 5GB free for containers and dependencies
What You’ll Install:
Docker Desktop 4.0+ OR Podman 4.0+ (container runtime)
Python 3.11 (programming language)
Osprey Framework (pip package)
Time estimate: 30-60 minutes for complete setup
Installation Steps#
Install Container Runtime
The framework supports both Docker and Podman. Install either one (not both required):
Option A: Docker Desktop
Installation:
Docker Desktop is the most widely used container platform, providing an integrated experience with native compose support.
Download and install Docker Desktop 4.0+ from the official Docker installation guide.
Verification:
After installation, verify Docker is working:
docker --version
docker compose version
docker run hello-world
Docker Desktop handles the VM setup automatically on macOS/Windows - no additional configuration needed.
Option B: Podman
Installation:
Podman is a daemonless container engine that provides enhanced security through rootless operation. Unlike Docker, Podman doesn’t require a privileged daemon running as root, offering better privilege separation and a reduced attack surface.
Install Podman 4.0+ from the official Podman installation guide.
Verification:
After installation, verify Podman is working:
podman --version
podman run hello-world
Podman Machine Setup (macOS/Windows only):
On macOS/Windows, initialize and start the Podman machine:
podman machine init
podman machine start
Note: Linux users can skip this step as Podman runs natively on Linux.
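If the default VM is too small for your services, you can size it at init time (the --cpus, --memory, and --disk-size flags are available in Podman 4.x; the values below are only suggestions):
podman machine init --cpus 4 --memory 8192 --disk-size 60  # 4 vCPUs, 8 GB RAM, 60 GB disk
podman machine start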
Runtime Selection:
The framework automatically detects which runtime is available. To explicitly choose a runtime:
Via configuration: Set container_runtime: docker or container_runtime: podman in config.yml
Via environment variable: export CONTAINER_RUNTIME=docker or export CONTAINER_RUNTIME=podman
If both are installed, Docker is preferred by default.
Environment Setup
Python 3.11 Requirement
This framework requires Python 3.11. Verify you have the correct version:
python3.11 --version
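If python3.11 is not found, install it first. As a sketch (assuming macOS with Homebrew or Ubuntu with the deadsnakes PPA; other platforms differ):
# macOS (Homebrew)
brew install python@3.11
# Ubuntu (deadsnakes PPA)
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.11 python3.11-venv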
Virtual Environment Setup
To avoid conflicts with your system Python packages, create a virtual environment with Python 3.11:
python3.11 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
Installing the Framework
After creating and activating the virtual environment, install the framework package:
# Upgrade pip to latest version
pip install --upgrade pip
# Install the framework
pip install osprey-framework
New in v0.7+: Pip-Installable Architecture
The framework is now distributed as a pip-installable package with modular dependencies. You no longer need to clone the repository or manage requirements.txt files manually.
Core Dependencies (always installed):
Core Framework: LangGraph, LangChain, Pydantic-AI
AI Providers: OpenAI, Anthropic, Google Generative AI, Ollama
CLI & UI: Rich, Click, prompt_toolkit
Container Runtime: Docker Desktop 4.0+ or Podman 4.0+ (installed separately via system package managers)
Configuration: PyYAML, Jinja2, python-dotenv
Networking: requests, websocket-client
Optional Dependencies (install with extras):
[scientific]: NumPy, Pandas, SciPy, Matplotlib, Seaborn, Scikit-learn, ipywidgets (required for Python execution capability)
[docs]: Sphinx and documentation tools
[dev]: pytest, black, mypy, and development tools
Installation Examples:
# Recommended: Core + scientific computing
pip install osprey-framework[scientific]
# Core + documentation
pip install osprey-framework[docs]
# Everything (includes docs, dev tools, etc.)
pip install osprey-framework[all]
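Shell note: zsh treats square brackets as glob patterns, so quote the extras there:
pip install "osprey-framework[scientific]"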
Creating a New Project
New in v0.7.7: Interactive Project Creation
The framework now includes an interactive menu that guides you through project creation with helpful prompts and automatic API key detection. This is the recommended method for new users.
Once the framework is installed, you can create a new project using either the interactive menu or direct CLI commands:
Method 1: Interactive Mode (Recommended for New Users)
Simply run osprey without any arguments to launch the interactive menu:
osprey
The interactive menu will:
Guide you through template selection with descriptions
Help you choose an AI provider (Cborg, OpenAI, Anthropic, etc.)
Let you select from available models
Automatically detect API keys from your environment
Create a ready-to-use project with smart defaults
Method 2: Direct CLI Command
For automation or if you prefer direct commands, use osprey init:
# Create a project with the hello_world_weather template
osprey init my-weather-agent --template hello_world_weather
# Navigate to your project
cd my-weather-agent
Available templates:
minimal - Basic skeleton for starting from scratch
hello_world_weather - Simple weather agent (recommended for learning)
control_assistant - Production control system integration template
Both methods create identical project structures - choose whichever fits your workflow.
Understand Your Project Structure
The generated project includes all components needed for a complete AI agent application:
Application code (src/) - Your capabilities, context classes, and business logic
Service configurations (services/) - Container configs for Jupyter, OpenWebUI, and Pipelines
Configuration file (config.yml) - Self-contained application settings
Environment template (.env.example) - API keys and secrets template
Project Structure Example (using hello_world_weather template):
my-weather-agent/
├── src/
│ └── my_weather_agent/
│ ├── __init__.py
│ ├── mock_weather_api.py # Data source
│ ├── context_classes.py # Data models
│ ├── registry.py # Component registration
│ └── capabilities/
│ ├── __init__.py
│ └── current_weather.py # Business logic
├── services/ # Container configurations
├── config.yml # Application settings
└── .env.example # API key template
Want to understand what each component does?
The Hello World Tutorial provides a detailed walkthrough of this structure - explaining what each file does, how components work together, and how to customize them for your needs. If you want to understand the architecture before continuing with deployment, jump to the tutorial now.
Configuration & Environment#
The generated project includes both a config.yml configuration file and a .env.example template for environment variables. Configure both for your environment:
Update config.yml
The generated project includes a complete config.yml file in the project root. All framework settings are pre-configured with sensible defaults. Modify the following settings as needed:
1. Project Root Path
The project_root in config.yml is automatically set to your project directory during framework init. For advanced use cases (multi-environment deployments), you can override this by setting PROJECT_ROOT in your .env file.
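For example, a hypothetical per-machine override in .env (the path below is a placeholder):
# .env - set only when this machine's project directory differs from config.yml
PROJECT_ROOT=/srv/deployments/my-weather-agent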
2. Ollama Base URL
Set the base URL for Ollama:
For direct host access: localhost:11434
For container-based agents (like OpenWebUI pipelines): host.containers.internal:11434
See Ollama Connection for OpenWebUI-specific configuration.
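As a sketch, the corresponding config.yml entry might look like this (the exact key layout may differ in your generated file - mirror what the template produced):
ollama:
  base_url: http://host.containers.internal:11434  # or http://localhost:11434 for direct host access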
3. Deployed Services
Ensure the following are uncommented in deployed_services:
jupyter - Environment for editing and running generated code
open_webui - Web-based chat interface
pipelines - Core agent runtime environment
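A sketch of the corresponding config.yml section (the list syntax is assumed - keep whatever form your generated file uses, and simply uncomment entries):
deployed_services:
  - jupyter     # code editing/execution environment
  - open_webui  # chat interface
  - pipelines   # agent runtime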
4. API Provider URLs
If using CBorg (LBNL internal only), set the API URL:
Global: https://api.cborg.lbl.gov/v1
Local: https://api-local.cborg.lbl.gov/v1 (requires local network)
In config.yml:
api:
  providers:
    cborg:
      base_url: https://api-local.cborg.lbl.gov/v1
5. Model Providers (External Users)
If you don’t have CBorg access, configure alternative providers in config.yml. Update the provider fields under the models section to use openai, anthropic, ollama, etc. Set corresponding API keys in your .env file.
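As a hedged sketch only (field names here are assumptions - check your generated config.yml for the real structure), switching a model entry to Anthropic might look like:
models:
  default:
    provider: anthropic
    model: <model-name>  # hypothetical placeholder - pick a model your key can access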
Need Support for Additional Providers?
We’re happy to implement support for additional model providers beyond those currently supported. Many research institutions and national laboratories now operate their own AI/LM services similar to LBNL’s CBorg system. If you need integration with your institution’s internal AI services or other providers, please reach out to us. We can work with you to add native support for your preferred provider.
Environment Variables
The framework uses environment variables for secrets (API keys) and machine-specific settings (file paths, network configuration). This allows you to run the same project on different machines - your laptop, a lab server, or a control room computer - without changing your code or config.yml.
The generated project includes a .env.example template with all supported variables.
When to use .env vs config.yml:
Environment variables (.env): Secrets, absolute paths, proxy settings that change per machine
Configuration file (config.yml): Application behavior, model choices, capabilities that stay the same
Automatic Setup (if API keys are in your environment):
If you already have API keys exported in your shell:
# These are already in your shell environment
export ANTHROPIC_API_KEY=sk-ant-...
export CBORG_API_KEY=...
# When you create a project, the framework automatically creates .env with them!
osprey init my-agent
# or use interactive mode: osprey
The framework will create a .env file automatically with your detected keys.
Manual Setup (if keys are not in environment):
If API keys are not in your environment, set them up manually:
# Copy the template
cp .env.example .env
# Edit with your values
nano .env # or your preferred editor
Required Variables:
API Keys (at least one required):
OPENAI_API_KEY - OpenAI API key for GPT models.
Get from: https://platform.openai.com/api-keys
ANTHROPIC_API_KEY - Anthropic API key for Claude models.
Get from: https://console.anthropic.com/
GOOGLE_API_KEY - Google API key for Gemini models.
Get from: https://makersuite.google.com/app/apikey
CBORG_API_KEY - CBorg API key (LBNL internal only).
Get from: https://cborg.lbl.gov/
Optional Variables:
OSPREY_PROJECT - Default project directory for CLI commands (new in v0.7.7). Allows working with specific projects without changing directories.
Example: /home/user/projects/my-agent. See CLI Reference for multi-project workflow examples.
LOCAL_PYTHON_VENV - Path to Python virtual environment for local execution mode.
Default: Uses current active environment
TZ - Timezone for timestamp formatting.
Default: America/Los_Angeles. Examples: UTC, Europe/London, Asia/Tokyo
CONFIG_FILE - Override config file location (advanced usage).
Default: config.yml in current directory
Optional Variables (for advanced use cases):
PROJECT_ROOT - Override the project_root value from config.yml. Useful for multi-environment deployments or if you move your project directory.
Example: /home/user/my-agent
Network Settings (for restricted environments):
HTTP_PROXY - HTTP proxy server URL. Useful in production environments with firewall restrictions (labs, control rooms, corporate networks).
Example: http://proxy.company.com:8080
NO_PROXY - Comma-separated list of hosts to exclude from proxy.
Example: localhost,127.0.0.1,.internal
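Putting it together, a minimal .env might look like this (all values are placeholders):
# Secrets - at least one provider key required
ANTHROPIC_API_KEY=sk-ant-xxxxxxxx
# Machine-specific settings (optional)
TZ=UTC
OSPREY_PROJECT=/home/user/projects/my-agent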
Note
Security & Multi-Machine Workflow:
The framework automatically loads .env from your project root
Keep .env in .gitignore to protect secrets from version control
Environment variables in config.yml are resolved using ${VARIABLE_NAME} syntax (see the sketch below)
Best practice: Keep one config.yml (in git), but different .env files per machine (NOT in git)
Example: .env.laptop, .env.controlroom, .env.server - copy the appropriate one to .env when running on that machine
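For example, config.yml can reference a secret instead of embedding it, using the ${VARIABLE_NAME} syntax described above (the api_key field name is an assumption; the surrounding keys follow the CBorg example earlier):
api:
  providers:
    cborg:
      base_url: https://api.cborg.lbl.gov/v1
      api_key: ${CBORG_API_KEY}  # resolved from .env at load time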
Documentation#
Compile Documentation (Optional)
If you have a source checkout of the framework and want to build and serve the documentation locally:
# Install documentation dependencies using optional dependencies
pip install -e ".[docs]"
# Build and serve documentation
python docs/launch_docs.py
Once running, you can view the documentation at http://localhost:8082
Building and Running#
Once you have installed the framework, created a project, and configured it, you can start the services. The framework includes a deployment CLI that orchestrates all services using your configured container runtime (Docker or Podman).
Start Services
The framework CLI provides convenient commands for managing services. For detailed information about all deployment options, see Container Deployment or the CLI reference.
New in v0.7+: Framework CLI Commands
Service management is now handled through the osprey deploy command instead of running Python scripts directly.
For initial setup and debugging, start services one by one in non-detached mode:
Comment out all services except one in your config.yml under deployed_services
Start the first service:
osprey deploy up
Monitor the logs to ensure it starts correctly
Once stable, stop with Ctrl+C and uncomment the next service
Repeat until all services are working
This approach helps identify issues early and ensures each service is properly configured before proceeding.
Once all services are tested individually, start everything together in detached mode:
osprey deploy up --detached
This runs all services in the background, suitable for production deployments where you don’t need to monitor individual service logs.
Other Deployment Commands
osprey deploy down # Stop all services
osprey deploy restart # Restart services
osprey deploy status # Show service status
osprey deploy clean # Clean deployment
osprey deploy rebuild # Rebuild containers
Verify Services are Running
Check that services are running properly:
# If using Docker
docker ps
# If using Podman
podman ps
Access OpenWebUI
Once services are running, access the web interface at:
OpenWebUI: http://localhost:8080
OpenWebUI Configuration#
OpenWebUI is a feature-rich, self-hosted web interface for Language Models that provides a ChatGPT-like experience with extensive customization options. The framework’s integration provides real-time progress tracking during agent execution, automatic display of registered figures and notebooks, and session continuity across conversations.
Ollama Connection:
For Ollama running on localhost, use http://host.containers.internal:11434 instead of http://localhost:11434 because containers cannot access the host’s localhost directly. This should match your config.yml Ollama base URL setting (see Configuration section above).
Once the correct URL is configured and Ollama is serving, OpenWebUI will automatically discover all models currently available in your Ollama installation.
Pipeline Connection:
The Osprey framework provides a pipeline connection to the OpenWebUI service.
Understanding Pipelines
OpenWebUI Pipelines are a powerful extensibility system that allows you to customize and extend OpenWebUI’s functionality. Think of pipelines as plugins that can:
Filter: Process user messages before they reach the LLM and modify responses after they return
Pipe: Create custom “models” that integrate external APIs, build workflows, or implement RAG systems
Integrate: Connect with external services, databases, or specialized AI providers
Pipelines appear as models with an “External” designation in your model selector and enable advanced functionality like real-time data retrieval, custom processing workflows, and integration with external AI services.
Go to Admin Panel → Settings (upper panel) → Connections (left panel)
Click the (+) button in Manage OpenAI API Connections
Configure the pipeline connection with these details:
URL: http://pipelines:9099 (if using default configuration)
API Key: Found in services/osprey/pipelines/docker-compose.yml.j2 under PIPELINES_API_KEY (default: 0p3n-w3bu!)
Note: The URL uses pipelines:9099 instead of localhost:9099 because OpenWebUI runs inside a container and communicates with the pipelines service through the container network.
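To sanity-check the connection from the host, you can query the pipelines service's OpenAI-compatible models endpoint - assuming port 9099 is published to the host and the default API key is unchanged (both are assumptions; adjust to your deployment):
curl -H "Authorization: Bearer 0p3n-w3bu!" http://localhost:9099/v1/models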
Additional OpenWebUI Configuration:
For optimal performance and user experience, consider these additional configuration settings:
Making Models Public:
To use Ollama models for OpenWebUI features like chat tagging, title generation, and other automated tasks, you must configure them as public models:
Go to Admin Panel → Settings → Models
Find the Ollama model you want to use (e.g., mistral:7b, llama3:8b)
Click the edit button (pencil icon) next to the model
Ensure the model is activated (enabled)
Set the model visibility to Public (not Private)
Click Save to apply the changes
Deactivating Unused Models:
Deactivate unused Ollama models in Admin Panel → Settings → Models to reduce clutter
This helps keep your model selection interface clean and focused on the models you actually use
You can always reactivate models later if needed
Setting a Dedicated Task Model:
OpenWebUI automatically generates titles and tags for conversations, which can interfere with your main agent’s processing. It’s recommended to use a dedicated local model for this:
Go to Admin Panel → Settings → Interface
Find Task Model setting
Change from Current Model to any local Ollama model (e.g., mistral:7b, llama3:8b)
This prevents title generation from consuming your main agent’s resources
Note that this model needs to be public as well (see Making Models Public above).
Adding Custom Function Buttons:
OpenWebUI allows you to add custom function buttons to enhance the user interface. For comprehensive information about functions, see the official OpenWebUI functions documentation.
Installing Functions:
Navigate to Admin Panel → Functions
Add a function using the plus sign (UI details may vary between OpenWebUI versions)
Copy and paste function code from our repository’s pre-built functions
Available Functions in Repository:
The framework includes several pre-built functions located in services/osprey/open-webui/functions/:
execution_history_button.py - View and manage execution history
agent_context_button.py - Access agent context information
memory_button.py - Memory management functionality
execution_plan_editor.py - Edit and manage execution plans
Activation Requirements:
After adding a function:
Enable the function - Activate it in the functions interface
Enable globally - Use additional options to enable the function globally
Refresh the page - The button should appear on your OpenWebUI interface after refresh
These buttons provide quick access to advanced agent capabilities and debugging tools.
Real-time Log Viewer:
For debugging and monitoring, use the /logs command in chat to view application logs without accessing container logs directly:
/logs - Show last 100 log entries
/logs 50 - Show last 50 log entries
/logs help - Show available options
This is particularly useful for troubleshooting when OpenWebUI provides minimal feedback by design.
Customizing Default Prompt Suggestions:
OpenWebUI provides default prompt suggestions that you can customize for your specific use case:
Accessing Default Prompts:
Go to Admin Panel → Settings → Interface
Scroll down to find Default Prompt Suggestions section
Here you can see the built-in OpenWebUI prompt suggestions
Customizing Prompts:
Remove Default Prompts: Clear the existing default prompts if they don’t fit your workflow
Add Custom Prompts: Replace them with prompts tailored to your agent’s capabilities
Use Cases:
Production: Set prompts that guide users toward your agent’s core functionalities
R&D Testing: Create prompts that help test specific features or edge cases
Domain-Specific: Add prompts relevant to your application domain (e.g., ALS operations, data analysis)
Example Custom Prompts:
“Analyze the recent beam performance data from the storage ring”
“Find PV addresses related to beam position monitors”
“Generate a summary of today’s logbook entries”
“Help me troubleshoot insertion device issues”
Benefits:
Guides users toward productive interactions with your agent
Reduces cognitive load for new users
Enables consistent testing scenarios during development
Improves user adoption by showcasing agent capabilities
Troubleshooting#
Common Issues:
If you encounter connection issues with Ollama, ensure you’re using host.containers.internal instead of localhost when connecting from containers
Verify that all required services are uncommented in config.yml
Check that API keys are properly set in the .env file
Ensure container runtime is running (Docker Desktop or Podman machine on macOS/Windows)
If containers fail to start, check logs with: docker logs <container_name> or podman logs <container_name>
Verification Steps:
Check Python version: python --version (should be 3.11.x)
Check container runtime version: docker --version or podman --version (should be 4.0.0+)
Verify virtual environment is active (should see (venv) in your prompt)
Test core framework imports: python -c "import langgraph; print('LangGraph installed successfully')"
Test container connectivity: docker run --rm alpine ping -c 1 host.containers.internal (or use podman instead)
Check service status: docker ps or podman ps
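The checks above can be collected into a small script for repeat use (a convenience sketch using only the commands listed; substitute podman for docker as appropriate):
#!/usr/bin/env bash
set -e
python --version                 # expect 3.11.x
docker --version                 # expect a 4.0+ container runtime
python -c "import langgraph; print('LangGraph installed successfully')"
docker run --rm alpine ping -c 1 host.containers.internal  # container-to-host connectivity
docker ps                        # running services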
Common Installation Issues:
Python version mismatch: Ensure you’re using Python 3.11 with python3.11 -m venv venv
Package conflicts: If you get dependency conflicts, try creating a fresh virtual environment
Missing dependencies: Installing the pip package (e.g., pip install osprey-framework[scientific]) should pull in everything needed; avoid mixing with system packages
Next Steps#
See also
- Hello World Tutorial
Build your first simple weather agent
- Production Control Systems Tutorial
Production control system assistant with channel finding and comprehensive tooling
- CLI Reference
Complete CLI command reference
- Configuration System
Deep dive into configuration system
- Registry and Discovery
Understanding the registry and component discovery