Built-in Capabilities#
Added in version 0.11.
Since v0.11, eight capabilities are provided natively by the framework in osprey.capabilities. They are registered automatically when your application starts — no code generation or template files required. Your application supplies only prompt customizations and data.
This page is a concise reference for each capability. For a hands-on walkthrough of how they work together in a real session, see the Control Assistant Tutorial.
Control System Capabilities#
These four capabilities form the control-system interaction layer, handling the full lifecycle from channel discovery through live reads, writes, and historical retrieval.
channel_finding#
Resolves natural language descriptions to control system channel addresses using configurable search pipelines.
Context flow:
Provides:
CHANNEL_ADDRESSES — list of matched channel address strings with the original search query
Requires: (none) — extracts the search query from the task objective
Configuration:
channel_finder:
  pipeline_mode: hierarchical # in_context | hierarchical | middle_layer
  pipelines:
    hierarchical:
      database:
        type: hierarchical
        path: src/.../channel_databases/hierarchical.json
Error handling: ChannelNotFoundError triggers replanning; ChannelFinderServiceError is critical.
To view or customize: osprey eject capability channel_finding
Context class reference: ChannelAddressesContext
- class osprey.capabilities.channel_finding.ChannelAddressesContext(*, channels, original_query)[source]
Bases: CapabilityContext
Framework context for channel finding capability results.
This is the rich context object used throughout the framework for channel address data. Based on ALS Assistant’s ChannelAddresses pattern.
Create a new model by parsing and validating input data from keyword arguments.
Raises [ValidationError][pydantic_core.ValidationError] if the input data cannot be validated to form a valid model.
self is explicitly positional-only to allow self as a field name.
- CONTEXT_TYPE: ClassVar[str] = 'CHANNEL_ADDRESSES'
- CONTEXT_CATEGORY: ClassVar[str] = 'METADATA'
- channels: list[str]
- original_query: str
- get_access_details(key)[source]
Rich description for LLM consumption.
- Return type:
dict[str, Any]
- get_summary()[source]
FOR HUMAN DISPLAY: Create readable summary for UI/debugging. Always customize for better user experience.
- Return type:
dict[str, Any]
- model_config = {'arbitrary_types_allowed': False, 'json_encoders': {<class 'datetime.datetime'>: <function CapabilityContext.<lambda>>}, 'populate_by_name': True, 'use_enum_values': True, 'validate_by_alias': True, 'validate_by_name': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
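The fields above can be visualized with a plain-Python stand-in (the real class is a Pydantic model; the channel names below are hypothetical):

```python
from dataclasses import dataclass

# Plain-Python stand-in for ChannelAddressesContext; the real class is a
# Pydantic model with the same two fields. Channel names are hypothetical.
@dataclass
class ChannelAddressesStandIn:
    channels: list[str]
    original_query: str

ctx = ChannelAddressesStandIn(
    channels=["SR01:BPM1:X", "SR01:BPM2:X"],
    original_query="BPM horizontal positions in sector 1",
)
```

Downstream capabilities read the channels list to know which addresses to act on, while original_query preserves the user's intent for replanning.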
See also
Part 2: Building Your Channel Finder — pipeline modes, database building, benchmarks
Prompt Customization — facility-specific prompt overrides for channel finder
channel_read#
Reads current live values from control system channels using the configured connector (mock, EPICS, Tango, or LabVIEW).
Context flow:
Provides:
CHANNEL_VALUES — dictionary mapping channel addresses to ChannelValue (value, timestamp, units)
Requires:
CHANNEL_ADDRESSES — typically from a preceding channel_finding step
Configuration:
control_system:
  type: mock # mock | epics | tango | labview
  connector:
    epics:
      timeout: 5.0
Error handling: Timeout and access errors are retriable (up to 3 attempts with exponential backoff). Missing dependencies trigger replanning.
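The retriable policy can be sketched as a generic backoff loop (a minimal illustration, not the framework's actual retry code; delays and jitter are assumptions):

```python
import random
import time

def read_with_retry(read_fn, channel, attempts=3, base_delay=0.5):
    # Retry transient timeouts with exponential backoff plus jitter,
    # mirroring the "up to 3 attempts" policy described above.
    for attempt in range(attempts):
        try:
            return read_fn(channel)
        except TimeoutError:
            if attempt == attempts - 1:
                raise  # retries exhausted: surface the error for replanning
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```

Access errors would follow the same pattern; missing-dependency errors are not retried and instead trigger replanning.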
To view or customize: osprey eject capability channel_read
Context class reference: ChannelValuesContext
- class osprey.capabilities.channel_read.ChannelValuesContext(*, channel_values)[source]
Bases: CapabilityContext
Result from channel value retrieval operation and context for downstream capabilities. Based on ALS Assistant’s ChannelValues pattern.
- CONTEXT_TYPE: ClassVar[str] = 'CHANNEL_VALUES'
- CONTEXT_CATEGORY: ClassVar[str] = 'COMPUTATIONAL_DATA'
- channel_values: dict[str, ChannelValue]
- property channel_count: int
Number of channels retrieved.
- get_access_details(key)[source]
Rich description for LLM consumption.
- Return type:
dict[str, Any]
- get_summary()[source]
FOR HUMAN DISPLAY: Create readable summary for UI/debugging. Always customize for better user experience.
- Return type:
dict[str, Any]
- model_config = {'arbitrary_types_allowed': False, 'json_encoders': {<class 'datetime.datetime'>: <function CapabilityContext.<lambda>>}, 'populate_by_name': True, 'use_enum_values': True, 'validate_by_alias': True, 'validate_by_name': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class osprey.capabilities.channel_read.ChannelValue(*, value, timestamp, units)[source]
Bases: BaseModel
Individual channel value data - simple nested structure for Pydantic.
- value: str
- timestamp: datetime
- units: str
- model_config = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
See also
Part 3: Integration & Deployment — live read examples in the tutorial walkthrough
Control System Integration — connector architecture and custom connectors
channel_write#
Writes values to control system channels with four mandatory safety layers.
Safety layers (enforced in order):
Master switch — control_system.writes_enabled must be true
Human approval — LangGraph interrupt for operator confirmation
Limits checking — min/max/step/writable constraints from limits database
Write verification — callback or readback confirmation after write
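Layer 4 (readback confirmation) reduces to a tolerance check; a minimal sketch, assuming a numeric channel and an absolute tolerance:

```python
def readback_verified(target: float, readback: float, tolerance: float) -> bool:
    # A write is confirmed only if the value read back from the channel
    # lands within tolerance of the requested value. Illustrative only;
    # the framework also supports callback-based verification.
    return abs(readback - target) <= tolerance

# e.g. a 1.25 setpoint read back as 1.2501 passes a 0.001 tolerance
```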
Context flow:
Provides:
CHANNEL_WRITE_RESULTS — per-channel success/failure with optional verification details
Requires:
CHANNEL_ADDRESSES — plus optionally PYTHON_RESULTS, ARCHIVER_DATA, or any other context containing the value to write
Configuration:
control_system:
  writes_enabled: false # Master safety switch
approval:
  capabilities:
    python_execution:
      enabled: true
      mode: control_writes # Approval for writes
Error handling: Access errors and read-only violations are critical. Write parsing failures are retriable. Ambiguous operations trigger replanning.
To view or customize: osprey eject capability channel_write
Context class reference: ChannelWriteResultsContext
- class osprey.capabilities.channel_write.ChannelWriteResultsContext(*, results, total_writes, successful_count, failed_count)[source]
Bases: CapabilityContext
Result from channel write operation and context for downstream capabilities. Provides detailed results for each write operation including success/failure status.
- CONTEXT_TYPE: ClassVar[str] = 'CHANNEL_WRITE_RESULTS'
- CONTEXT_CATEGORY: ClassVar[str] = 'COMPUTATIONAL_DATA'
- results: list[ChannelWriteResult]
- total_writes: int
- successful_count: int
- failed_count: int
- get_access_details(key)[source]
Rich description for LLM consumption.
- Return type:
dict[str, Any]
- get_summary()[source]
FOR HUMAN DISPLAY: Create readable summary for UI/debugging. Always customize for better user experience.
- Return type:
dict[str, Any]
- model_config = {'arbitrary_types_allowed': False, 'json_encoders': {<class 'datetime.datetime'>: <function CapabilityContext.<lambda>>}, 'populate_by_name': True, 'use_enum_values': True, 'validate_by_alias': True, 'validate_by_name': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class osprey.capabilities.channel_write.ChannelWriteResult(*, channel_address, value_written, success, verification=None, error_message=None)[source]
Bases: BaseModel
Individual channel write result data with optional verification.
This is the high-level result model used in capabilities and context classes. Provides detailed information about write success and verification status.
- channel_address: str
- value_written: Any
- success: bool
- verification: WriteVerificationInfo | None
- error_message: str | None
- model_config = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class osprey.capabilities.channel_write.WriteVerificationInfo(*, level, verified, readback_value=None, tolerance_used=None, notes=None)[source]
Bases: BaseModel
Verification information for a channel write operation.
- level: str
- verified: bool
- readback_value: float | None
- tolerance_used: float | None
- notes: str | None
- model_config = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
See also
Human Approval — approval interrupt mechanics
Part 3: Integration & Deployment — write workflow walkthrough
archiver_retrieval#
Retrieves historical time-series data from the facility archiver for analysis and visualization.
Context flow:
Provides:
ARCHIVER_DATA — timestamps, time-series values per channel, precision, and available channel list
Requires:
CHANNEL_ADDRESSES + TIME_RANGE (single) — from preceding channel_finding and time_range_parsing steps
Configuration:
archiver:
  type: mock_archiver # mock_archiver | epics_archiver
  epics_archiver:
    url: https://archiver.facility.edu:8443
    timeout: 60
Common downstream patterns:
archiver_retrieval → python (create plot) → respond
archiver_retrieval → python (calculate statistics) → respond
Error handling: Timeouts are retriable. Connection errors are critical. Data format issues and missing dependencies trigger replanning.
To view or customize: osprey eject capability archiver_retrieval
Context class reference: ArchiverDataContext
- class osprey.capabilities.archiver_retrieval.ArchiverDataContext(*, timestamps, precision_ms, time_series_data, available_channels, timezone_name='')[source]
Bases: CapabilityContext
Structured context for archiver data capability results.
This stores archiver data with datetime objects for full datetime functionality and consistency. Based on ALS Assistant’s ArchiverDataContext pattern with downsampling support.
- CONTEXT_TYPE: ClassVar[str] = 'ARCHIVER_DATA'
- CONTEXT_CATEGORY: ClassVar[str] = 'COMPUTATIONAL_DATA'
- timestamps: list[datetime]
- precision_ms: int
- time_series_data: dict[str, list[float]]
- available_channels: list[str]
- timezone_name: str
- get_access_details(key)[source]
Rich description of the archiver data structure.
- Return type:
dict[str, Any]
- get_summary()[source]
FOR HUMAN DISPLAY: Format data for response generation. Downsamples large datasets to prevent context window overflow.
- Return type:
dict[str, Any]
- model_config = {'arbitrary_types_allowed': False, 'json_encoders': {<class 'datetime.datetime'>: <function CapabilityContext.<lambda>>}, 'populate_by_name': True, 'use_enum_values': True, 'validate_by_alias': True, 'validate_by_name': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
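The downsampling mentioned in get_summary() can be illustrated with a simple stride scheme (the framework's actual strategy may differ):

```python
def downsample(timestamps, values, max_points=500):
    # Keep at most max_points evenly spaced samples so large archiver
    # pulls fit within an LLM context window. Illustrative only.
    n = len(timestamps)
    if n <= max_points:
        return timestamps, values
    stride = -(-n // max_points)  # ceiling division
    return timestamps[::stride], values[::stride]
```

A stride scheme preserves the overall trend; min/max or LTTB schemes preserve extremes better and would be a reasonable alternative.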
See also
Part 3: Integration & Deployment — archiver + plotting workflow
Control System Integration — archiver connector configuration
Analysis & Execution Capabilities#
These two capabilities provide computational analysis and temporal reasoning.
python#
Generates and executes Python code through the Python executor service. Acts as a gateway between the agent graph and the sandboxed execution environment.
Context flow:
Provides:
PYTHON_RESULTS — generated code, stdout/stderr, computed results (results.json), execution time, figure paths, and notebook link
Requires: (none) — but commonly receives ARCHIVER_DATA, CHANNEL_VALUES, or other contexts as inputs for analysis
Configuration:
services:
  jupyter:
    path: ./services/jupyter
    containers:
      read:
        name: jupyter-read
        port_host: 8088
approval:
  capabilities:
    python_execution:
      enabled: true
      mode: control_writes # disabled | all_code | control_writes
Error handling: All service errors are retriable (the service handles retries internally with up to 3 attempts).
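As an illustration of the results.json hand-off, a downstream consumer might load the computed results like this (the keys shown are hypothetical):

```python
import json
import os
import tempfile

# Hypothetical computed results as the executor service might persist them:
results = {"mean_current_mA": 499.7, "std_current_mA": 0.3}

path = os.path.join(tempfile.mkdtemp(), "results.json")
with open(path, "w") as f:
    json.dump(results, f)

# A later step (or response generation) reads the structured results back:
with open(path) as f:
    loaded = json.load(f)
```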
To view or customize: osprey eject capability python
Context class reference: PythonResultsContext
See also
Python Execution — service architecture, code generation, and security
Prompt Customization — Python prompt builder for domain-specific instructions
time_range_parsing#
Converts natural language time expressions into precise datetime ranges using LLM-based analysis. Supports relative periods (“last 24 hours”), named periods (“yesterday”), and absolute date references.
Context flow:
Provides:
TIME_RANGE — start_date and end_date as Python datetime objects with full arithmetic and formatting support
Requires: (none) — extracts temporal expressions from the task objective
Configuration:
The capability uses the model configured under the time_parsing key in models:. No additional configuration required.
Error handling: Invalid formats are retriable (LLM may parse correctly on retry). Ambiguous time references trigger replanning to request user clarification.
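For example, a query like "last 24 hours" resolves to a pair of timezone-aware datetimes (the reference timestamp below is an assumption for illustration):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical resolution of "last 24 hours" at a fixed reference time:
end_date = datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc)
start_date = end_date - timedelta(hours=24)

assert start_date < end_date          # invariant enforced by the context
duration = end_date - start_date      # full datetime arithmetic is available
```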
To view or customize: osprey eject capability time_range_parsing
Context class reference: TimeRangeContext
- class osprey.capabilities.time_range_parsing.TimeRangeContext(*, start_date, end_date, timezone_name='')[source]
Bases: CapabilityContext
Structured context for time range parsing results with datetime objects.
Provides comprehensive context for parsed time ranges using native Python datetime objects for maximum functionality and type safety. The context enables other capabilities to perform sophisticated temporal operations including arithmetic, comparisons, and formatting without additional parsing.
The context maintains both start and end datetime objects with full timezone information, enabling precise temporal calculations and consistent behavior across different system environments.
- Parameters:
start_date (datetime) – Parsed start datetime with timezone information (local timezone)
end_date (datetime) – Parsed end datetime with timezone information (local timezone)
Note
Datetime objects are timezone-aware and provide full functionality including arithmetic operations (end_date - start_date), comparisons, and flexible formatting options. Use the to_timezone() method to convert to other timezones for display.
Warning
The context validates that start_date < end_date during initialization to ensure logical time range consistency.
See also
osprey.context.base.CapabilityContext: Base context functionality
TimeRangeParsingCapability.execute(): Main capability that creates this context
TimeRangeOutput: Pydantic model that provides data for this context
- start_date: datetime
- end_date: datetime
- timezone_name: str
- derive_timezone_name()[source]
Derive timezone_name from start_date after validate_datetime has run.
- Return type:
TimeRangeContext
- CONTEXT_TYPE: ClassVar[str] = 'TIME_RANGE'
- CONTEXT_CATEGORY: ClassVar[str] = 'METADATA'
- property context_type: str
Return the context type identifier
- get_access_details(key)[source]
Provide comprehensive access information for time range context integration.
Generates detailed access information for other capabilities to understand how to interact with parsed time range data. Includes access patterns, datetime functionality descriptions, and practical usage examples for leveraging the full power of datetime objects.
- Parameters:
key_name (Optional[str]) – Optional context key name for access pattern generation
- Returns:
Dictionary containing comprehensive access details and datetime usage examples
- Return type:
Dict[str, Any]
Note
Emphasizes the full datetime functionality available including arithmetic, comparison operations, and flexible formatting capabilities.
Important
All datetime objects are timezone-aware. If you need to display times in a different timezone, use .astimezone() to convert: local_time = utc_time.astimezone(ZoneInfo("America/Los_Angeles"))
- get_summary()[source]
Generate summary for UI display and debugging.
Creates a formatted summary of the parsed time range suitable for display in user interfaces, debugging output, and development tools. Uses human-friendly formatting while maintaining precision.
- Parameters:
key_name (Optional[str]) – Optional context key name for reference
- Returns:
Dictionary containing time range summary
- Return type:
Dict[str, Any]
Note
Uses standardized datetime formatting for consistency across the framework while providing duration calculations for context.
- classmethod validate_datetime(v)[source]
Validate and convert datetime inputs with comprehensive format support.
Provides robust datetime validation that accepts multiple input formats including ISO strings with timezone information, standard datetime strings, and native datetime objects. Handles timezone conversion and normalization for consistent behavior.
- Parameters:
v (Union[str, datetime]) – Input value to validate and convert to datetime
- Returns:
Validated datetime object with proper timezone information
- Return type:
datetime
- Raises:
ValueError – If input cannot be parsed as a valid datetime
Note
Supports ISO format strings with and without timezone information, automatically handling UTC conversion and local timezone assumptions.
- model_config = {'arbitrary_types_allowed': False, 'json_encoders': {<class 'datetime.datetime'>: <function CapabilityContext.<lambda>>}, 'populate_by_name': True, 'use_enum_values': True, 'validate_by_alias': True, 'validate_by_name': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
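The timezone conversion described in the note above can be sketched with the standard library (the example instant and zone are arbitrary):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

# Convert a timezone-aware UTC datetime to a display timezone:
utc_time = datetime(2024, 6, 1, 19, 30, tzinfo=timezone.utc)
local_time = utc_time.astimezone(ZoneInfo("America/Los_Angeles"))
assert local_time.utcoffset() == timedelta(hours=-7)  # PDT in June
```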
See also
Part 3: Integration & Deployment — time parsing in the archiver retrieval workflow
Prompt Customization — time range parsing prompt builder
Knowledge & Retrieval Capabilities#
These two capabilities provide persistent memory and historical search over facility logbooks.
memory#
Prototype Implementation
The memory capability is a minimal working implementation — a first step toward a full memory system. It provides basic save/retrieve operations with file-based storage. A production-grade graph-based memory system is planned.
Saves and retrieves user information across conversations. Uses LLM-based analysis to extract memory-worthy content from chat history and integrates with the approval system for controlled modifications.
Context flow:
Provides:
MEMORY_CONTEXT — operation type (save/retrieve), result message, and memory data
Requires: (none)
Supported operations:
Save — extracts content from chat history using LLM analysis, optionally requests approval, then stores persistently
Retrieve — fetches all stored memory entries for the current user
Configuration:
approval:
  capabilities:
    memory:
      enabled: true # Require approval for memory saves
session:
  user_id: operator_1 # Required for memory operations
Error handling: Missing user ID is critical. Content extraction failures trigger replanning. Storage and retrieval errors are retriable.
To view or customize: osprey eject capability memory
Context class reference: MemoryContext
- class osprey.capabilities.memory.MemoryContext(*, memory_data, operation_type, operation_result=None)[source]
Bases: CapabilityContext
Framework memory context for storing and retrieving user memory data.
Provides structured context for memory operations including save and retrieve operations. This context integrates with the execution context system to provide memory data access to other capabilities that need user context information.
The context maintains operation metadata and results, allowing capabilities to understand both what memory operation was performed and access the resulting data. This enables sophisticated workflows where capabilities can build upon previously stored or retrieved user information.
- Parameters:
memory_data (Dict[str, Any]) – Dictionary containing memory operation data and results
operation_type (str) – Type of memory operation performed (‘store’, ‘retrieve’, ‘search’)
operation_result (Optional[str]) – Human-readable result message from the operation
Note
The memory_data structure varies based on operation_type:
- 'store': contains saved_content and timestamp
- 'retrieve': contains a memories list with all stored entries
- 'search': contains filtered results based on search criteria
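The per-operation payloads can be sketched as plain dictionaries (field names follow the note above; the values and the timestamp format are hypothetical):

```python
# 'store' result: what was saved and when (timestamp format is an assumption):
store_data = {
    "saved_content": "Prefers beam current plots in mA",
    "timestamp": "2024-06-01T12:00:00",
}

# 'retrieve' result: all stored entries for the current user:
retrieve_data = {
    "memories": [
        {"content": "Prefers beam current plots in mA"},
    ],
}
```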
See also
osprey.context.base.CapabilityContext: Base context functionality
osprey.services.memory_storage.MemoryContent: Memory entry structure
MemoryOperationsCapability.execute(): Main capability that creates this context
_perform_memory_save_operation(): Save operation that produces this context
_perform_memory_retrieve_operation(): Retrieve operation that produces this context
- CONTEXT_TYPE: ClassVar[str] = 'MEMORY_CONTEXT'
- CONTEXT_CATEGORY: ClassVar[str] = 'CONTEXTUAL_KNOWLEDGE'
- memory_data: dict[str, Any]
- operation_type: str
- operation_result: str | None
- get_access_details(key)[source]
Provide detailed access information for capability context integration.
Generates comprehensive access details for other capabilities to understand how to interact with this memory context data. Includes access patterns, example usage, and data structure descriptions.
- Parameters:
key_name (Optional[str]) – Optional context key name for access pattern generation
- Returns:
Dictionary containing access details and usage examples
- Return type:
Dict[str, Any]
Note
This method is called by the framework’s context management system to provide integration guidance for other capabilities.
- get_summary()[source]
Generate summary for response generation and UI display.
Creates a formatted summary of the memory operation results suitable for display in user interfaces and inclusion in agent responses. Returns raw data structures for robust LLM processing rather than pre-formatted strings.
- Parameters:
key_name (Optional[str]) – Optional context key name for reference
- Returns:
Dictionary containing memory operation summary
- Return type:
Dict[str, Any]
Note
This method returns structured data rather than formatted strings to enable robust LLM processing and response generation.
- model_config = {'arbitrary_types_allowed': False, 'json_encoders': {<class 'datetime.datetime'>: <function CapabilityContext.<lambda>>}, 'populate_by_name': True, 'use_enum_values': True, 'validate_by_alias': True, 'validate_by_name': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
See also
Memory Storage — storage backend and file format
Prompt Customization — memory extraction prompt builder
logbook_search#
Searches facility logbooks using ARIEL’s agentic retrieval system. Handles natural language queries about facility history, equipment incidents, and operational knowledge.
Context flow:
Provides:
LOGBOOK_SEARCH_RESULTS — matched entries, RAG-generated answer, source citations, and search metadata
Requires: (none) — TIME_RANGE is optional via soft constraint
Configuration:
ariel:
  database:
    uri: postgresql://ariel:ariel@localhost:5432/ariel
  search_modules:
    keyword:
      enabled: true
    semantic:
      enabled: false
      provider: ollama
      model: nomic-embed-text
  pipelines:
    rag:
      enabled: true
      retrieval_modules: [keyword]
    agent:
      enabled: true
      retrieval_modules: [keyword]
      reasoning:
        provider: cborg
        model_id: anthropic/claude-haiku
Error handling:
DatabaseConnectionError → critical (with setup guidance)
DatabaseQueryError → retriable for transient errors, critical for missing tables
EmbeddingGenerationError → critical
ConfigurationError → critical
Unknown exceptions → critical
To view or customize: osprey eject capability logbook_search
Context class reference: LogbookSearchResultsContext
- class osprey.capabilities.logbook_search.LogbookSearchResultsContext(*, entries, answer, sources, search_modes_used, query, time_range_applied)[source]
Bases: CapabilityContext
Search results from ARIEL logbook search.
Provides structured context for logbook search results including matched entries, RAG-generated answers, and search metadata for downstream capabilities and response generation.
- entries
Matching entries, ranked by relevance.
- Type:
tuple[dict, …]
- answer
RAG-generated answer (if RAG was used).
- Type:
str | None
- sources
Entry IDs cited in answer.
- Type:
tuple[str, …]
- search_modes_used
Search modes that were invoked (e.g., ["semantic", "rag"]).
- Type:
tuple[str, …]
- query
Original query text.
- Type:
str
- time_range_applied
Whether time filter was used.
- Type:
bool
- CONTEXT_TYPE: ClassVar[str] = 'LOGBOOK_SEARCH_RESULTS'
- CONTEXT_CATEGORY: ClassVar[str] = 'DATA'
- entries: tuple[dict, ...]
- answer: str | None
- sources: tuple[str, ...]
- search_modes_used: tuple[str, ...]
- query: str
- time_range_applied: bool
- property context_type: str
Return context type identifier.
- get_access_details(key)[source]
Provide access information for other capabilities.
- Return type:
dict[str, Any]
- get_summary()[source]
Generate summary for response generation including actual content.
Returns both metadata and actual search results/answers to enable the RespondCapability to generate meaningful user responses. Follows the pattern established by PythonResultContext.
- Return type:
dict[str, Any]
- model_config = {'arbitrary_types_allowed': False, 'json_encoders': {<class 'datetime.datetime'>: <function CapabilityContext.<lambda>>}, 'populate_by_name': True, 'use_enum_values': True, 'validate_by_alias': True, 'validate_by_name': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
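A downstream consumer might cross-reference the answer's citations against the matched entries; a minimal sketch with hypothetical entries:

```python
# Hypothetical search results following the fields documented above:
entries = (
    {"id": "LOG-1234", "text": "RF cavity trip during injection"},
    {"id": "LOG-5678", "text": "Replaced faulty vacuum gauge"},
)
answer = "One RF-related trip was logged during injection."
sources = ("LOG-1234",)

# Keep only the entries the RAG answer actually cited:
cited = [e for e in entries if e["id"] in sources]
```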
See also
Logbook Search Service (ARIEL) — service architecture and search modes
Prompt Customization — logbook search prompt builder
Customization#
All built-in capabilities support customization through two mechanisms:
Prompt overrides — Place prompt builder files in your project’s framework_prompts/ directory to customize orchestrator guides, classifier examples, and domain-specific instructions without modifying capability code. See Prompt Customization.
Eject for full control — Use osprey eject <capability> to copy a capability’s source into your project for complete customization. The ejected copy takes precedence over the framework version. See osprey eject for details.
See also
Part 4: Customization & Extension — customization tutorial with practical examples