Inference

Interact with AI language models.

Configure and call AI models from any workflow step. Supports OpenAI, Anthropic, Google, OpenRouter, and any OpenAI-compatible provider through the inference provider registry.

Build dynamic prompts with template expressions, request structured JSON output via schema definitions, estimate token costs before calling, and list available models at runtime. Every call is tracked by the accountant for per-run cost reporting.
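The cost estimate is simple arithmetic once a token count and per-token prices are known. The sketch below illustrates that calculation only; the character-per-token heuristic and all prices are hypothetical placeholders, not this project's tokenizer or any provider's real rates.

```python
# Sketch of the arithmetic behind a token-cost estimate.
# The ~4-characters-per-token heuristic and the prices below are
# illustrative placeholders, not real provider rates.

def estimate_cost(prompt: str, price_per_1k_input: float,
                  expected_output_tokens: int,
                  price_per_1k_output: float) -> dict:
    """Rough estimate: ~4 characters per token for English text."""
    input_tokens = max(1, len(prompt) // 4)
    cost = ((input_tokens / 1000) * price_per_1k_input
            + (expected_output_tokens / 1000) * price_per_1k_output)
    return {"input_tokens": input_tokens,
            "estimated_cost_usd": round(cost, 6)}

estimate = estimate_cost("Summarize this document in one sentence.",
                         price_per_1k_input=0.005,
                         expected_output_tokens=50,
                         price_per_1k_output=0.015)
```

A real estimator would use the provider's tokenizer rather than a character heuristic, but the shape of the result (token count plus projected cost) is the same.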

Components

Component: inference
Description: Makes LLM inference requests using the configured provider and its model. Supports tools, LLM parameter settings, and cost calculation with user-currency conversion. Pass either a template path or a literal string to the prompt field to render dynamic prompts with variables.

Component: inference:estimate
Description: Estimates the token count and cost for a prompt without running the model.

Component: inference:models
Description: Lists all available models from the configured inference provider.
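For orientation, a workflow step using these components might look like the following sketch. The step layout and field names (provider, model, prompt, schema) are assumptions made for illustration; consult the project's actual configuration reference for the real schema.

```yaml
# Hypothetical workflow-step configuration; all field names are illustrative.
steps:
  - id: summarize
    component: inference
    with:
      provider: openai               # any provider from the inference provider registry
      model: gpt-4o-mini
      prompt: ./prompts/summarize.md # template path, or an inline string
      schema:                        # request structured JSON output
        type: object
        properties:
          summary: { type: string }

  - id: preflight
    component: inference:estimate    # token count and cost, no model call
    with:
      model: gpt-4o-mini
      prompt: ./prompts/summarize.md
```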
