Documentation
Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type LLM

type LLM interface {
	Name() string
	Generate(context.Context, *Request) iter.Seq2[*Response, error]
}
LLM is the model abstraction used by the kernel.
type Message

type Message struct {
	Role         Role
	Text         string
	Reasoning    string
	ToolCalls    []ToolCall
	ToolResponse *ToolResponse
}

Message is a single turn element in the model context.
type ReasoningConfig

type ReasoningConfig struct {
	// Enabled toggles reasoning mode when supported by the provider.
	Enabled *bool
	// BudgetTokens limits the provider's thinking tokens when supported.
	BudgetTokens int
	// Effort is a provider-specific reasoning-effort hint, e.g. low|medium|high.
	Effort string
}

ReasoningConfig controls provider reasoning/thinking behavior.
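Because Enabled is a `*bool`, a nil pointer (provider default) is distinct from an explicit false. A small sketch of constructing the config, with the documented fields inlined locally:

```go
package main

import "fmt"

// ReasoningConfig as documented above.
type ReasoningConfig struct {
	Enabled      *bool
	BudgetTokens int
	Effort       string
}

func main() {
	on := true
	cfg := ReasoningConfig{
		Enabled:      &on,    // explicit opt-in; leaving it nil defers to the provider
		BudgetTokens: 2048,   // cap on provider thinking tokens, where supported
		Effort:       "high", // provider-specific hint
	}
	fmt.Println(*cfg.Enabled, cfg.BudgetTokens, cfg.Effort)
}
```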
type Request

type Request struct {
	Messages  []Message
	Tools     []ToolDefinition
	Stream    bool
	Reasoning ReasoningConfig
}

Request is a provider-agnostic model request.
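A sketch of assembling a request; `newSummaryRequest` is a hypothetical helper, and the local types are simplified copies of the documented ones (ToolDefinition's fields are elided in the docs, so it is left empty here):

```go
package main

import "fmt"

// Simplified local copies of the documented types.
type Role string

type Message struct {
	Role Role
	Text string
}

type ToolDefinition struct{}

type ReasoningConfig struct {
	Enabled      *bool
	BudgetTokens int
	Effort       string
}

type Request struct {
	Messages  []Message
	Tools     []ToolDefinition
	Stream    bool
	Reasoning ReasoningConfig
}

// newSummaryRequest builds a streaming request with reasoning
// explicitly toggled; the prompt text is illustrative.
func newSummaryRequest(reasoning bool) *Request {
	return &Request{
		Messages: []Message{
			{Role: "system", Text: "You are concise."},
			{Role: "user", Text: "Summarize the release notes."},
		},
		Stream:    true,
		Reasoning: ReasoningConfig{Enabled: &reasoning, Effort: "low"},
	}
}

func main() {
	req := newSummaryRequest(true)
	fmt.Println(len(req.Messages), req.Stream, *req.Reasoning.Enabled)
}
```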
type Response

type Response struct {
	Message      Message
	Partial      bool
	TurnComplete bool
	Usage        Usage
	Model        string
	Provider     string
}

Response is a provider-agnostic model response chunk.
type ToolCall

type ToolCall struct {
	ID   string
	Name string
	Args map[string]any
	// ThoughtSignature carries a provider-specific chain-of-thought signature
	// required by some providers (for example, Gemini) to validate tool loops.
	ThoughtSignature string
}

ToolCall is a model-emitted tool invocation request.
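A host typically routes each call by Name to a local handler and feeds the result back into the model context. A sketch using only the documented ID/Name/Args fields; the handler table and the "add" tool are illustrative, not part of the package:

```go
package main

import "fmt"

// ToolCall as documented above (ThoughtSignature omitted for brevity).
type ToolCall struct {
	ID   string
	Name string
	Args map[string]any
}

// dispatch routes a model-emitted tool call to a registered handler,
// returning an error for unknown tool names.
func dispatch(call ToolCall, handlers map[string]func(map[string]any) (any, error)) (any, error) {
	h, ok := handlers[call.Name]
	if !ok {
		return nil, fmt.Errorf("unknown tool %q", call.Name)
	}
	return h(call.Args)
}

func main() {
	handlers := map[string]func(map[string]any) (any, error){
		// JSON numbers decode as float64, hence the assertions below.
		"add": func(args map[string]any) (any, error) {
			return args["a"].(float64) + args["b"].(float64), nil
		},
	}
	out, err := dispatch(ToolCall{
		ID:   "call_1",
		Name: "add",
		Args: map[string]any{"a": 1.0, "b": 2.0},
	}, handlers)
	fmt.Println(out, err)
}
```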
type ToolDefinition

ToolDefinition describes a callable tool for model planning.

type ToolResponse

ToolResponse is a tool execution result returned to the model context.