Awee Engine
The AI-native workflow engine. Compose, automate, and orchestrate anything.
Awee Engine is a next-generation workflow automation engine built around a single idea: every task becomes a reusable, modular workflow, and every workflow can be used as a component in another workflow.
The engine compiles each workflow into an execution plan, resolves template expressions and variable dependencies, and runs each step in order, passing outputs from one step as inputs to the next.
Why a workflow engine?
Modern automations chain together many different systems: AI models, web APIs, databases, file processing, shell commands. Wiring these together manually means duplicated glue code, fragile scripts, no visibility into what ran, what failed, or what it cost, and no way to debug or roll back changes.
Awee Engine handles that layer for you:
- Typed steps with validated inputs and documented outputs
- Template expressions thread data between steps with zero boilerplate
- Built-in retry and caching so flaky external calls don't break your pipeline
- Real-time AI cost tracking on every model call — priced in your currency, updated daily
- Structured step tracing — started, completed, cached, skipped, retried — with full metrics per run
How it works
1. Define your workflow in a YAML file. Name your steps, pick a component for each, and wire outputs to inputs using {{ step_name.field }} expressions.
2. Run the engine. It compiles your workflow into an immutable execution plan with hierarchical step IDs and validates all inputs before the first step runs.
3. Steps execute in order. Each step's outputs become available to every step that follows. Conditional branches, loops, retries, and caching are all declared in the same config.
4. Collect results. The engine returns a structured map of all step outputs, along with execution metrics, cost totals, and a full trace log.
A workflow in 30 lines
```yaml
id: research-and-summarise
name: Research and Summarise
actions:
  - name: search
    component: search:web
    vars:
      query: "{{ topic }} latest developments 2025"

  - name: fetch
    component: crawler
    vars:
      url: "{{ search.results[0].url }}"

  - name: summarise
    component: inference
    vars:
      provider: anthropic
      model: claude-opus-4-20250514
      prompt: |
        Summarise the following article in three bullet points.
        Article: {{ fetch.output }}
    cache:
      for: 1h

  - name: save
    component: file:write
    vars:
      path: "./output/{{ topic | slug }}.md"
      content: "{{ summarise.content }}"
```

This workflow searches the web, crawls the top result, asks an AI model to summarise it, and writes the result to a file. The summary is cached for one hour — re-running with the same inputs skips the AI call entirely.
What the engine can do
AI Inference
Call OpenAI, Anthropic, Ollama, or OpenRouter from any step. Tool use, extended thinking, and streaming are built in. Every call is metered and costed automatically.
Real-time Cost Tracking
AI model pricing is fetched daily and converted to your local currency in real time. Know exactly what each run costs, both before you run it (via inference:estimate) and after.
Web Research
Search the web and crawl pages to markdown in a single step. Feed live results directly into an AI model, a file, or a database record.
File Automation
Read, write, copy, chunk, merge, and validate files — CSV, JSON, XML, or plain text. Process thousands of records with the each loop.
HTTP & APIs
Make any HTTP request with dynamic headers, bodies, and auth tokens assembled from prior step outputs. REST, webhooks, and internal APIs all work the same way.
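As a sketch of what chaining HTTP steps can look like, the second step below builds its Authorization header from the first step's output. The method, body, and headers field names, the login.output.token path, and the URLs are illustrative assumptions rather than documented options:

```yaml
- name: login
  component: http
  vars:
    url: "https://api.example.com/auth"   # hypothetical endpoint
    method: POST                          # field name assumed
    body: '{"key": "{{ api_key }}"}'

- name: fetch_report
  component: http
  vars:
    url: "https://api.example.com/reports/latest"
    headers:                              # field name assumed
      Authorization: "Bearer {{ login.output.token }}"   # output shape assumed
```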
Browser Automation
Scrape structured data from web pages using configurable extraction rules and browser automation steps. Works with extensions and headless providers.
Shell Execution
Run any shell command and pipe its stdout into the next step. Combine with AI inference for dynamic, self-modifying automation.
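For instance, a shell step's stdout can feed directly into an inference prompt. The component name shell and the command var below are assumptions inferred from the shell category; check the Components reference for the documented names:

```yaml
- name: disk_usage
  component: shell              # component name assumed
  vars:
    command: "du -sh ./data"    # var name assumed

- name: explain
  component: inference
  vars:
    provider: anthropic
    model: claude-opus-4-20250514
    prompt: "Summarise this disk usage report: {{ disk_usage.output }}"
```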
Data Platform
Full CRUD access to Awee's managed database layer — schemas, tables, fields, and records — all from within a workflow.
Event-Driven Composition
Emit named events at any point in a workflow to trigger side-effects, notify external systems, or chain workflows together without tight coupling.
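A minimal sketch of emitting an event mid-workflow. The component name event and its name and payload vars are assumptions; only the concept of named events is documented here:

```yaml
- name: notify
  component: event                # component name assumed
  vars:
    name: "article.summarised"    # var names assumed
    payload:
      url: "{{ article_url }}"
```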
50+ Built-in Components
Inference, web search, crawling, HTTP, file I/O, CSV/JSON/XML processing, databases, shell, browser scraping, events, and more — all wired and ready.
AI inference, built in
The engine treats AI inference as a first-class workflow step. Every call to a language model is:
- Configurable — provider, model, temperature, token limits, tool use, and extended thinking all set per step
- Dynamic — prompts are template expressions; they can include prior step outputs, file contents, or any workflow variable
- Metered — token counts (input, output, reasoning) are tracked on every call
- Priced in real time — model pricing tables are updated daily; costs are converted to your configured currency using live exchange rates
- Estimable — use inference:estimate before calling to check cost against a budget and abort early if it's too high
```yaml
- name: check_budget
  component: inference:estimate
  vars:
    provider: anthropic
    model: claude-opus-4-20250514
    prompt: "{{ document }}"

- name: analyse
  component: inference
  if: "{{ check_budget.input_cost | lt 0.05 }}"
  vars:
    provider: anthropic
    model: claude-opus-4-20250514
    prompt: "Extract key insights from: {{ document }}"
  else:
    - name: abort
      component: error
      vars:
        message: "Estimated cost {{ check_budget.input_cost | currency 'EUR' }} exceeds budget"
```

Supported providers: OpenAI, Anthropic, Ollama (local), OpenRouter (100+ models). Switch providers by changing one field — everything else stays the same.
Execution metrics
Every workflow run produces a structured metrics report. For each step you get:
- Execution time
- Cache hit or miss
- Retry count
- Token usage (input, output, reasoning)
- Inference cost in your configured currency
- Step status (completed, skipped, cached, failed)
Metrics are available in-process via the Metrics() call and streamed in real time via the tracer — making it straightforward to build dashboards, cost alerts, or audit logs on top of the engine.
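As an illustration, a metrics report covering the fields above might look like the following. The exact field names and nesting are assumptions, not the engine's documented schema:

```yaml
steps:
  search:
    status: completed
    duration_ms: 412        # execution time (field name assumed)
    cache: miss
    retries: 0
  summarise:
    status: cached
    cache: hit
    tokens: { input: 1840, output: 96, reasoning: 0 }
    cost: { amount: 0.031, currency: EUR }
total_cost: { amount: 0.031, currency: EUR }
```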
Template expressions
Every field in every step supports template expressions. The engine resolves them before the step runs.
```yaml
# simple variable reference
url: "https://api.example.com/{{ user_id }}"

# chained step output
prompt: "Translate this text: {{ fetch.output }}"

# modifier chain
filename: "{{ title | slug | lower }}.md"

# conditional default
model: "{{ preferred_model | default 'claude-opus-4-20250514' }}"

# collection access
first_result: "{{ search.results | first }}"

# currency formatting
message: "Cost: {{ cost | currency 'EUR' }}"
```

Built-in modifiers cover text transformation (slug, trim, lower, upper, replace, shorten, join, split), collection operations (first, last, find, filter, sort, map, wrap), type conversion (json, from, yaml), filesystem helpers (dirname, filename, ext), comparisons (eq, gt, gte, not), and currency formatting (currency).
Control flow
Any step can declare an if expression. If it evaluates to false the step is skipped and the optional else branch runs instead.
```yaml
- name: check_cost
  component: inference:estimate
  vars:
    prompt: "{{ long_document }}"
    model: claude-opus-4-20250514

- name: summarise
  component: inference
  if: "{{ check_cost.input_cost | lt 0.10 }}"
  vars:
    prompt: "Summarise: {{ long_document }}"
  else:
    - name: warn
      component: error
      vars:
        message: "Estimated cost {{ check_cost.input_cost }} exceeds budget"
```

Use each to iterate over any list. Child steps run once per item, with the current element and index available through the each step's name — for a step named loop, that's loop.output and loop.index.
```yaml
- name: items
  component: csv:chunk
  vars:
    path: "./data/records.csv"
    size: 100

- name: loop
  component: each
  vars:
    items: "{{ items.output }}"
  actions:
    - name: process
      component: inference
      vars:
        prompt: "Process this record: {{ loop.output }}"
```

Any step can be retried automatically on failure. Configure the number of retries and the delay between attempts.
```yaml
- name: fetch
  component: http
  vars:
    url: "https://unstable-api.example.com/data"
  retry:
    count: 3
    delay: 2
```

Cache any step's outputs by duration. On a cache hit the step is skipped entirely — the cached value is injected as if the step ran.
```yaml
- name: search
  component: search:web
  vars:
    query: "{{ topic }}"
  cache:
    for: 24h
```

Composable workflows
Any workflow with an input_schema and outputs map can be registered as a named component and called from any other workflow — just like a built-in component. This enables reuse, versioning, and composition of complex automations from smaller, tested building blocks.
```yaml
# inner workflow registered as "summarise:article"
id: summarise:article
input_schema:
  url: string
outputs:
  summary: "{{ summarise.content }}"

# outer workflow calls it as a step
actions:
  - name: result
    component: summarise:article
    vars:
      url: "{{ article_url }}"
```

Components
The engine ships with 50+ built-in components across ten categories. See the Components reference for the full list with all options and outputs documented.
Components that manage schemas, tables, fields, and records (app:*) require an Awee Premium subscription.