Feature-gated LLM provider implementations and shared provider abstractions.
gestura-core-llm is the domain crate behind Gestura’s provider layer. It
defines the common LlmProvider trait, shared response/token models, model
listing helpers, default model catalogs, and the concrete provider
implementations used by the runtime.
§Supported providers
Provider implementations are enabled with Cargo features and currently cover:
- OpenAI
- Anthropic
- Grok (xAI)
- Gemini
- Ollama (local)
§Design role
This crate owns provider-specific HTTP behavior and response normalization.
Higher-level concerns such as configuration-driven provider selection,
runtime overrides, and pipeline orchestration remain in gestura-core.
The stable public import path for most consumers remains
gestura_core::llm_provider::*.
§Shared abstractions
- LlmProvider: async provider interface used by the runtime
- LlmCallResponse: normalized response with text, usage, and tool calls
- TokenUsage: provider-agnostic token accounting and estimated cost data
- ToolCallInfo: normalized native function/tool call representation
- default_models, model_listing, token_tracker: support modules for model defaults, discovery, and token accounting
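As a rough sketch of how these shared models fit together (field names and types here are illustrative assumptions, not the crate's actual definitions):

```rust
// Illustrative sketch only — field names and types are assumptions,
// not the actual gestura-core-llm definitions.

/// Provider-agnostic token accounting (sketch).
#[derive(Debug, Clone, Default)]
struct TokenUsage {
    prompt_tokens: u32,
    completion_tokens: u32,
    estimated_cost_usd: f64, // hypothetical estimated-cost field
}

/// A normalized native function/tool call (sketch).
#[derive(Debug, Clone)]
struct ToolCallInfo {
    name: String,
    arguments: String, // JSON-encoded arguments
}

/// Normalized response from an LLM call (sketch).
#[derive(Debug, Clone)]
struct LlmCallResponse {
    text: String,
    usage: TokenUsage,
    tool_calls: Vec<ToolCallInfo>,
}

fn main() {
    let resp = LlmCallResponse {
        text: "hello".into(),
        usage: TokenUsage { prompt_tokens: 12, completion_tokens: 3, estimated_cost_usd: 0.0001 },
        tool_calls: vec![ToolCallInfo {
            name: "get_weather".into(),
            arguments: r#"{"city":"Oslo"}"#.into(),
        }],
    };
    println!(
        "{} tool call(s), {} total tokens",
        resp.tool_calls.len(),
        resp.usage.prompt_tokens + resp.usage.completion_tokens
    );
}
```

The point of the shapes above is that pipeline code downstream only ever sees these normalized types, never a provider's raw wire format.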
§Native tool calling
Where providers support it, Gestura normalizes native function/tool calling
into a common ToolCallInfo representation so the pipeline can process tool
calls consistently across providers.
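A minimal sketch of that normalization idea, using deliberately simplified stand-ins for two providers' wire formats (the struct shapes below are assumptions for illustration, not the crate's real request/response types):

```rust
// Sketch of the normalization idea — the provider-specific shapes here
// are simplified assumptions, not the crate's real wire formats.

/// Normalized tool call (stand-in for the crate's ToolCallInfo role).
#[derive(Debug, PartialEq)]
struct ToolCallInfo {
    name: String,
    arguments: String,
}

/// OpenAI-style call: function name plus a JSON argument string (simplified).
struct OpenAiToolCall {
    function_name: String,
    arguments_json: String,
}

/// Anthropic-style call: a tool_use block with name and input (simplified).
struct AnthropicToolUse {
    name: String,
    input_json: String,
}

fn normalize_openai(c: &OpenAiToolCall) -> ToolCallInfo {
    ToolCallInfo { name: c.function_name.clone(), arguments: c.arguments_json.clone() }
}

fn normalize_anthropic(c: &AnthropicToolUse) -> ToolCallInfo {
    ToolCallInfo { name: c.name.clone(), arguments: c.input_json.clone() }
}

fn main() {
    let a = normalize_openai(&OpenAiToolCall {
        function_name: "search".into(),
        arguments_json: r#"{"q":"rust"}"#.into(),
    });
    let b = normalize_anthropic(&AnthropicToolUse {
        name: "search".into(),
        input_json: r#"{"q":"rust"}"#.into(),
    });
    // Both providers collapse to the same normalized representation,
    // so the pipeline needs only one tool-call code path.
    assert_eq!(a, b);
    println!("normalized: {:?}", a);
}
```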
§Feature-gated workspace design
This crate is intentionally feature-gated so applications can compile only
the providers they need. That keeps optional integrations isolated and makes
the workspace easier to reason about in cargo doc and CI.
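A consumer might opt into only the providers it ships. The feature names below are assumptions based on the provider list above; check the crate's Cargo.toml for the actual names:

```toml
# Hypothetical Cargo.toml entry — feature names are assumed from the
# provider list above, not verified against the crate's manifest.
[dependencies]
gestura-core-llm = { version = "0.1", default-features = false, features = ["openai", "ollama"] }
```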
Modules§
- default_models - Centralized default AI model constants for all providers.
- model_capabilities - Dynamic model capabilities discovery and caching.
- model_discovery - Dynamic model metadata discovery via provider APIs.
- model_listing - Dynamic and static model listing for all LLM providers.
- openai - OpenAI model capability and endpoint routing helpers.
- token_tracker - Token usage tracking and cost estimation for Gestura.
Structs§
- AgentContext - Context hints for provider selection (agent, tenant, etc.)
- AnthropicProvider - HTTP-based Anthropic Claude provider
- GeminiProvider - HTTP-based Google Gemini provider (Generative Language API).
- GrokProvider - HTTP-based Grok (xAI) provider (OpenAI-compatible endpoint)
- LlmCallResponse - Response from an LLM call including token usage
- OllamaProvider - HTTP-based Ollama local provider
- OpenAiProvider - HTTP-based OpenAI completion provider
- TokenUsage - Token usage information from an LLM API call
- ToolCallInfo - A structured tool call returned by the LLM when using native function calling.
- UnconfiguredProvider - A provider that returns an error when no real provider is configured. Used when config is missing or invalid.
Traits§
- LlmProvider - Unified LLM interface (async)
Functions§
- unconfigured_provider - Create an unconfigured provider that returns an error when called. Used when a provider is not properly configured.
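The fallback pattern behind unconfigured_provider can be sketched as follows. The trait and error type here are simplified stand-ins (and synchronous, for brevity; the real LlmProvider trait is async), not the crate's actual API:

```rust
// Sketch of the "unconfigured provider" fallback — the trait and error
// shape are simplified assumptions, not the crate's real API.

#[derive(Debug)]
struct ProviderError(String);

/// Simplified, synchronous stand-in for the crate's async LlmProvider trait.
trait LlmProvider {
    fn call(&self, prompt: &str) -> Result<String, ProviderError>;
}

/// Always errors; used when config is missing or invalid.
struct UnconfiguredProvider;

impl LlmProvider for UnconfiguredProvider {
    fn call(&self, _prompt: &str) -> Result<String, ProviderError> {
        Err(ProviderError("no LLM provider configured; check your config".into()))
    }
}

/// Factory mirroring the role of unconfigured_provider: the runtime always
/// holds a usable provider object, and misconfiguration surfaces as an
/// error at call time rather than as a missing value.
fn unconfigured_provider() -> Box<dyn LlmProvider> {
    Box::new(UnconfiguredProvider)
}

fn main() {
    let provider = unconfigured_provider();
    match provider.call("hello") {
        Ok(_) => unreachable!("unconfigured provider never succeeds"),
        Err(e) => println!("expected failure: {:?}", e),
    }
}
```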