Crate gestura_core_llm

Feature-gated LLM provider implementations and shared provider abstractions.

gestura-core-llm is the domain crate behind Gestura’s provider layer. It defines the common LlmProvider trait, shared response/token models, model listing helpers, default model catalogs, and the concrete provider implementations used by the runtime.

§Supported providers

Provider implementations are enabled with Cargo features and currently cover:

  • OpenAI
  • Anthropic
  • Grok (xAI)
  • Gemini
  • Ollama (local)

§Design role

This crate owns provider-specific HTTP behavior and response normalization. Higher-level concerns such as configuration-driven provider selection, runtime overrides, and pipeline orchestration remain in gestura-core.

The stable public import path for most consumers remains gestura_core::llm_provider::*.

§Shared abstractions

  • LlmProvider: async provider interface used by the runtime
  • LlmCallResponse: normalized response with text, usage, and tool calls
  • TokenUsage: provider-agnostic token accounting and estimated cost data
  • ToolCallInfo: normalized native function/tool call representation
  • default_models, model_listing, token_tracker: support modules for model defaults, discovery, and token accounting
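For orientation, the shared response types described above might be shaped roughly as follows. This is a hedged, self-contained sketch: the field names, derives, and the `total` helper are assumptions for illustration, not the crate's actual definitions.

```rust
// Hypothetical sketch of the shared response model; field names mirror the
// docs above, but the real definitions in gestura-core-llm may differ.
#[derive(Debug, Default, Clone)]
pub struct TokenUsage {
    pub prompt_tokens: u32,
    pub completion_tokens: u32,
    // Estimated cost is provider-agnostic, derived from per-model pricing.
    pub estimated_cost_usd: f64,
}

#[derive(Debug, Clone)]
pub struct ToolCallInfo {
    pub name: String,
    pub arguments_json: String,
}

#[derive(Debug)]
pub struct LlmCallResponse {
    pub text: String,
    pub usage: TokenUsage,
    pub tool_calls: Vec<ToolCallInfo>,
}

impl TokenUsage {
    /// Total tokens consumed by a call (prompt plus completion).
    pub fn total(&self) -> u32 {
        self.prompt_tokens + self.completion_tokens
    }
}

fn main() {
    let usage = TokenUsage { prompt_tokens: 120, completion_tokens: 30, estimated_cost_usd: 0.0009 };
    println!("total tokens: {}", usage.total());
}
```

Keeping token accounting provider-agnostic like this lets the runtime aggregate usage and cost across heterogeneous providers without per-provider logic.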

§Native tool calling

Where providers support it, Gestura normalizes native function/tool calling into a common ToolCallInfo representation so the pipeline can process tool calls consistently across providers.
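The normalization step can be sketched as below. Everything here is illustrative: the `ToolCallInfo` fields and the two converter functions are hypothetical, and a real implementation would use serde_json rather than the hand-rolled serialization shown; the point is only that differently shaped provider payloads converge on one representation.

```rust
use std::collections::HashMap;

// Hypothetical normalized shape; the real ToolCallInfo may differ.
#[derive(Debug, PartialEq)]
struct ToolCallInfo {
    name: String,
    arguments_json: String,
}

// OpenAI-style payloads already carry the arguments as a JSON string.
fn from_openai_style(name: &str, arguments: &str) -> ToolCallInfo {
    ToolCallInfo { name: name.to_string(), arguments_json: arguments.to_string() }
}

// Anthropic-style payloads carry a structured input map, which we serialize
// into the same JSON-string representation. (Sketch only: a real converter
// would use serde_json instead of formatting strings by hand.)
fn from_anthropic_style(name: &str, input: &HashMap<String, String>) -> ToolCallInfo {
    let mut pairs: Vec<String> = input
        .iter()
        .map(|(k, v)| format!("\"{}\":\"{}\"", k, v))
        .collect();
    pairs.sort(); // deterministic ordering for downstream consumers
    ToolCallInfo { name: name.to_string(), arguments_json: format!("{{{}}}", pairs.join(",")) }
}

fn main() {
    let call = from_openai_style("search", "{\"query\":\"rust\"}");
    println!("{} {}", call.name, call.arguments_json);
}
```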

§Feature-gated workspace design

This crate is intentionally feature-gated so applications can compile only the providers they need. That keeps optional integrations isolated and makes the workspace easier to reason about in cargo doc and CI.
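A consumer opting into only the providers it needs might declare a dependency like the following. This is a sketch: the feature names shown are assumptions and may not match the crate's actual feature list.

```toml
# Hypothetical Cargo.toml fragment; feature names are illustrative.
[dependencies]
gestura-core-llm = { version = "*", default-features = false, features = ["openai", "ollama"] }
```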

Modules§

default_models
Centralized default AI model constants for all providers.
model_capabilities
Dynamic model capabilities discovery and caching.
model_discovery
Dynamic model metadata discovery via provider APIs.
model_listing
Dynamic and static model listing for all LLM providers.
openai
OpenAI model capability and endpoint routing helpers.
token_tracker
Token usage tracking and cost estimation for Gestura.

Structs§

AgentContext
Context hints for provider selection (agent, tenant, etc.).
AnthropicProvider
HTTP-based Anthropic Claude provider.
GeminiProvider
HTTP-based Google Gemini provider (Generative Language API).
GrokProvider
HTTP-based Grok (xAI) provider (OpenAI-compatible endpoint).
LlmCallResponse
Response from an LLM call, including token usage.
OllamaProvider
HTTP-based Ollama local provider.
OpenAiProvider
HTTP-based OpenAI completion provider.
TokenUsage
Token usage information from an LLM API call.
ToolCallInfo
A structured tool call returned by the LLM when using native function calling.
UnconfiguredProvider
A provider that returns an error when no real provider is configured. Used when config is missing or invalid.

Traits§

LlmProvider
Unified LLM interface (async).

Functions§

unconfigured_provider
Create an unconfigured provider that returns an error when called. Used when a provider is not properly configured.
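The fallback pattern behind this function can be sketched as follows. The types and signatures here are hypothetical stand-ins, not the crate's real API: the idea is simply that a misconfigured runtime gets a provider that fails loudly with a descriptive error at call time, rather than panicking or silently doing nothing.

```rust
// Hypothetical sketch of the "unconfigured provider" fallback pattern.
// The real unconfigured_provider() and UnconfiguredProvider may differ.
#[derive(Debug)]
struct UnconfiguredProvider {
    reason: String,
}

impl UnconfiguredProvider {
    // Every call fails with a descriptive error instead of panicking.
    fn call(&self, _prompt: &str) -> Result<String, String> {
        Err(format!("no LLM provider configured: {}", self.reason))
    }
}

fn unconfigured_provider(reason: &str) -> UnconfiguredProvider {
    UnconfiguredProvider { reason: reason.to_string() }
}

fn main() {
    let provider = unconfigured_provider("missing or invalid provider config");
    println!("{:?}", provider.call("hello"));
}
```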