Framework Utilities#
Supporting systems for advanced usage and development tooling.
LLM Completion Interface#
Multi-provider LLM completions via LiteLLM for structured generation and direct completions.
- osprey.models.get_chat_completion(message='', max_tokens=1024, model_config=None, provider=None, model_id=None, budget_tokens=None, enable_thinking=False, output_model=None, base_url=None, provider_config=None, temperature=0.0, chat_request=None, tools=None, tool_choice=None)[source]#
Execute direct chat completion requests across multiple AI providers via LiteLLM.
This function provides immediate access to LLM inference with support for advanced features including extended thinking, structured outputs, and automatic TypedDict conversion.
- Parameters:
message (str) – Input prompt or message for the LLM model
max_tokens (int) – Maximum tokens to generate in the response
model_config (dict | None) – Configuration dictionary with provider and model settings
provider (str | None) – AI provider name (‘anthropic’, ‘google’, ‘openai’, ‘ollama’, ‘cborg’, etc.)
model_id (str | None) – Specific model identifier recognized by the provider
budget_tokens (int | None) – Thinking budget for Anthropic/Google extended reasoning
enable_thinking (bool) – Enable extended thinking capabilities where supported
output_model (type[BaseModel] | None) – Pydantic model or TypedDict for structured output validation
base_url (str | None) – Custom API endpoint, required for Ollama and CBORG providers
provider_config (dict | None) – Optional provider configuration dict with api_key, base_url, etc.
temperature (float) – Sampling temperature (0.0-2.0)
- Raises:
ValueError – If required provider, model_id, api_key, or base_url are missing
- Returns:
Model response (str, Pydantic model, or list of content blocks for thinking)
- Return type:
str | BaseModel | list
Examples
Simple text completion:
>>> from osprey.models import get_chat_completion
>>> response = get_chat_completion(
...     message="Explain quantum computing",
...     provider="anthropic",
...     model_id="claude-sonnet-4",
... )
Structured output:
>>> from pydantic import BaseModel
>>> class Result(BaseModel):
...     summary: str
...     confidence: float
>>>
>>> result = get_chat_completion(
...     message="Analyze this data",
...     provider="openai",
...     model_id="gpt-4o",
...     output_model=Result,
... )
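TypedDict output (a hedged sketch: the signature notes automatic TypedDict conversion for output_model; `Analysis` is a hypothetical model, and the get_chat_completion call is shown commented so the snippet stands alone):

```python
from typing import TypedDict

# Hypothetical structured-output model passed via output_model.
class Analysis(TypedDict):
    summary: str
    confidence: float

# result = get_chat_completion(
#     message="Analyze this data",
#     provider="anthropic",
#     model_id="claude-sonnet-4",
#     output_model=Analysis,
# )

# A conforming result would be a plain dict matching the TypedDict keys:
example: Analysis = {"summary": "Upward trend in Q3", "confidence": 0.92}
```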
Developer Tools#
Unified logging system with automatic LangGraph streaming support for framework development.
Logging and Streaming#
The framework provides a unified logging API that automatically handles both CLI output
and web UI streaming. Use logger.status() for high-level updates that should appear
in both interfaces, and standard logging methods (info(), debug()) for detailed
CLI-only output.
Recommended Pattern:
# In capabilities - automatic streaming
logger = self.get_logger()
logger.status("Creating execution plan...") # Logs + streams
logger.info("Active capabilities: [...]") # Logs only
# In other nodes with state
logger = get_logger("orchestrator", state=state)
logger.status("Processing...") # Logs + streams
- osprey.utils.logger.get_logger(component_name=None, level=logging.INFO, *, state=None, name=None, color=None, source=None)[source]#
Get a unified logger that handles both CLI logging and LangGraph streaming.
- Primary API (recommended - use via BaseCapability.get_logger()):
component_name: Component name (e.g., 'orchestrator', 'data_analysis')
state: Optional AgentState for streaming context and step tracking
level: Logging level
- Explicit API (for custom loggers or module-level usage):
name: Direct logger name (keyword-only)
color: Direct color specification (keyword-only)
level: Logging level
- Returns:
ComponentLogger instance that logs to CLI and optionally streams
- Return type:
ComponentLogger
Examples
# Recommended: Use via BaseCapability
class MyCapability(BaseCapability):
    async def execute(self):
        logger = self.get_logger()  # Auto-streams!
        logger.status("Working...")

# Module-level (no streaming)
logger = get_logger("orchestrator")
logger.info("Planning started")

# With streaming (when you have state)
logger = get_logger("orchestrator", state=state)
logger.status("Creating execution plan...")  # Logs + streams
logger.info("Active capabilities: [...]")  # Logs only
logger.error("Failed!")  # Logs + streams

# Custom logger
logger = get_logger(name="test_logger", color="blue")
Deprecated: The two-parameter API get_logger(source, component_name) is deprecated. Use get_logger(component_name) instead. The flat configuration structure (logging.logging_colors.{component_name}) replaces the old nested structure.
- class osprey.utils.logger.ComponentLogger(base_logger, component_name, color='white', state=None)[source]#
Bases: object
Rich-formatted logger for Osprey and application components with color coding and message hierarchy.
Now includes optional LangGraph streaming support via lazy initialization.
Message Types:
- status: High-level status updates (logs + streams automatically)
- key_info: Important operational information
- info: Normal operational messages
- debug: Detailed tracing information
- warning: Warning messages
- error: Error messages (logs + streams automatically)
- success: Success messages (logs + streams by default)
- timing: Timing information
- approval: Approval messages
- resume: Resume messages
Initialize component logger.
- Parameters:
base_logger (Logger) – Underlying Python logger
component_name (str) – Name of the component (e.g., ‘data_analysis’, ‘router’, ‘mongo’)
color (str) – Rich color name for this component
state (Any) – Optional AgentState for streaming context
- __init__(base_logger, component_name, color='white', state=None)[source]#
Initialize component logger.
- Parameters:
base_logger (Logger) – Underlying Python logger
component_name (str) – Name of the component (e.g., ‘data_analysis’, ‘router’, ‘mongo’)
color (str) – Rich color name for this component
state (Any) – Optional AgentState for streaming context
- emit_event(event)[source]#
Emit a typed OspreyEvent directly.
Use this for structured events like PhaseStartEvent, CapabilityStartEvent, etc. that don’t fit the standard logging pattern.
- Parameters:
event (StatusEvent | PhaseStartEvent | PhaseCompleteEvent | TaskExtractedEvent | CapabilitiesSelectedEvent | PlanCreatedEvent | CapabilityStartEvent | CapabilityCompleteEvent | LLMRequestEvent | LLMResponseEvent | ToolUseEvent | ToolResultEvent | CodeGeneratedEvent | CodeExecutedEvent | CodeGenerationStartEvent | ApprovalRequiredEvent | ApprovalReceivedEvent | ResultEvent | ErrorEvent) – The typed event to emit
Example
from osprey.events import PhaseStartEvent
logger.emit_event(PhaseStartEvent(
    phase="task_extraction",
    description="Extracting task from query",
))
- emit_llm_request(prompt, key='', model='', provider='')[source]#
Emit LLMRequestEvent with full prompt for TUI display.
- Parameters:
prompt (str) – The complete LLM prompt text
key (str) – Optional key for accumulating multiple prompts (e.g., capability name)
model (str) – Model identifier (e.g., “gpt-4”, “claude-3-opus”)
provider (str) – Provider name (e.g., “openai”, “anthropic”)
- emit_llm_response(response, key='', duration_ms=0, input_tokens=0, output_tokens=0)[source]#
Emit LLMResponseEvent with full response for TUI display.
- Parameters:
response (str) – The complete LLM response text
key (str) – Optional key for accumulating multiple responses (e.g., capability name)
duration_ms (int) – How long the request took in milliseconds
input_tokens (int) – Number of input tokens
output_tokens (int) – Number of output tokens
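A sketch of pairing emit_llm_request and emit_llm_response around a completion call, measuring duration_ms with the standard library. The osprey-specific calls are shown as comments (an assumption about typical usage; only the timing arithmetic runs standalone):

```python
import time

prompt = "Summarize the findings"

# logger.emit_llm_request(prompt, key="data_analysis",
#                         model="gpt-4o", provider="openai")

start = time.perf_counter()
# response = get_chat_completion(message=prompt,
#                                provider="openai", model_id="gpt-4o")
response = "(stand-in for the real completion)"
# Milliseconds as an int, matching the duration_ms parameter type.
duration_ms = int((time.perf_counter() - start) * 1000)

# logger.emit_llm_response(response, key="data_analysis",
#                          duration_ms=duration_ms)
```

Using the same key for both events lets the TUI accumulate the request/response pair under one capability.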
- status(message, *args, **kwargs)[source]#
Status update - emits StatusEvent.
User-facing output. Transport is automatic:
- During graph.astream(): LangGraph streaming
- Outside graph execution: fallback transport → TypedEventHandler
- Parameters:
message (str) – Status message
*args – Optional %-style format args (stdlib compat)
**kwargs – Additional metadata for streaming event
Example
logger.status("Creating execution plan...")
logger.status("Processing batch 2/5", batch=2, total=5)
- key_info(message, *args, **kwargs)[source]#
Important operational information - emits StatusEvent with info level.
User-facing output. Transport is automatic.
- Parameters:
message (str) – Info message
*args – Optional %-style format args (stdlib compat)
**kwargs – Additional metadata for streaming event
- info(message, *args, **kwargs)[source]#
Info message - emits StatusEvent with info level.
User-facing output. Transport is automatic.
- Parameters:
message (str) – Info message
*args – Optional %-style format args (stdlib compat)
**kwargs – Additional metadata for streaming event
Example
logger.info("Active capabilities: [...]")
logger.info("Step completed")
- debug(message, *args, **kwargs)[source]#
Debug message - emits StatusEvent with debug level.
User-facing output (filtered by client if not needed). Transport is automatic.
- Parameters:
message (str) – Debug message
*args – Optional %-style format args (stdlib compat)
**kwargs – Additional metadata for streaming event
- warning(message, *args, **kwargs)[source]#
Warning message - emits StatusEvent with warning level.
User-facing output. Transport is automatic.
- Parameters:
message (str) – Warning message
*args – Optional %-style format args (stdlib compat)
**kwargs – Additional metadata for streaming event
- error(message, *args, exc_info=False, **kwargs)[source]#
Error message - emits ErrorEvent.
User-facing output. Transport is automatic.
- Parameters:
message (str) – Error message
*args – Optional %-style format args (stdlib compat)
exc_info (bool) – Whether to include exception traceback in ErrorEvent
**kwargs – Additional error metadata for streaming event
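A sketch of what exc_info=True does inside an exception handler. It is shown with the stdlib logging module so the snippet runs standalone, assuming ComponentLogger mirrors stdlib semantics for this flag (the equivalent ComponentLogger call is shown as a comment):

```python
import io
import logging

# Capture log output in a buffer so the traceback behaviour is visible.
stream = io.StringIO()
log = logging.getLogger("exc_demo")
log.addHandler(logging.StreamHandler(stream))

try:
    1 / 0
except ZeroDivisionError:
    # With ComponentLogger: logger.error("Division failed", exc_info=True)
    log.error("Division failed", exc_info=True)  # message + traceback

output = stream.getvalue()
# `output` now contains both the message and the ZeroDivisionError traceback.
```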
- success(message, *args, **kwargs)[source]#
Success message - emits StatusEvent with success level.
User-facing output. Transport is automatic.
- Parameters:
message (str) – Success message
*args – Optional %-style format args (stdlib compat)
**kwargs – Additional metadata for streaming event
- timing(message, *args, **kwargs)[source]#
Timing information - emits StatusEvent with timing level.
User-facing output. Transport is automatic.
- Parameters:
message (str) – Timing message
*args – Optional %-style format args (stdlib compat)
**kwargs – Additional metadata for streaming event
- approval(message, *args, **kwargs)[source]#
Approval message - emits StatusEvent with approval level.
User-facing output. Transport is automatic.
- Parameters:
message (str) – Approval message
*args – Optional %-style format args (stdlib compat)
**kwargs – Additional metadata for streaming event
- resume(message, *args, **kwargs)[source]#
Resume message - emits StatusEvent with resume level.
User-facing output. Transport is automatic.
- Parameters:
message (str) – Resume message
*args – Optional %-style format args (stdlib compat)
**kwargs – Additional metadata for streaming event
- critical(message, *args, **kwargs)[source]#
Critical error - emits ErrorEvent.
User-facing output. Transport is automatic.
- Parameters:
message (str) – Critical error message
*args – Optional %-style format args (stdlib compat)
**kwargs – Additional error metadata for streaming event
- exception(message, *args, **kwargs)[source]#
Exception with traceback - emits ErrorEvent with stack trace.
User-facing output. Transport is automatic.
- Parameters:
message (str) – Exception message
*args – Optional %-style format args (stdlib compat)
**kwargs – Additional error metadata for streaming event
- property level: int#
- property name: str#
Legacy Streaming API (Deprecated)#
Deprecated since version 0.9.2: The separate streaming API is deprecated in favor of the unified logging system.
Use osprey.base.capability.BaseCapability.get_logger() in capabilities or
get_logger() with state parameter for automatic streaming support.
For backward compatibility only. New code should use the unified logging system above.
See also
- Orchestration Architecture
Development utilities integration patterns and configuration conventions