Documentation
Index
Constants
This section is empty.
Variables
var (
	// ErrResultTruncated is returned when the OpenAI API returned a truncated
	// result. The reason for the truncation will be appended to the error
	// string.
	ErrResultTruncated = errors.New("result was truncated")

	// ErrNoResults is returned if the OpenAI API returned an empty result. This
	// should not generally happen.
	ErrNoResults = errors.New("no results return from API")

	// ErrUnsupportedBackend is returned if the provided backend name is
	// unknown.
	ErrUnsupportedBackend = errors.New("unsupported backend")

	// ErrUnsupportedModel is returned if the SetModel method is provided with
	// an unsupported model.
	ErrUnsupportedModel = errors.New("unsupported model")

	// ErrUnexpectedStatus is returned when the OpenAI API returned a response
	// with an unexpected status code.
	ErrUnexpectedStatus = errors.New("OpenAI returned unexpected response")

	// ErrRequestFailed is returned when the OpenAI API returned an error for
	// the request.
	ErrRequestFailed = errors.New("request failed")
)
Functions
func ExtractCode
ExtractCode receives the full output string from the OpenAI API and attempts to extract a code block from it. OpenAI code blocks are generally Markdown blocks surrounded by the ``` string on both sides. If successful, the code string is returned together with a true value; otherwise, an empty string is returned together with a false value.
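The described behavior can be sketched roughly as follows. This is an illustrative reimplementation of the fence-scanning idea, not the package's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// extractCode looks for a pair of Markdown fences (```) and returns the
// text between them. An optional language tag after the opening fence
// (e.g. ```go) is skipped along with the rest of that line.
func extractCode(output string) (string, bool) {
	start := strings.Index(output, "```")
	if start == -1 {
		return "", false
	}
	rest := output[start+3:]
	// Drop the remainder of the opening-fence line (the language tag).
	if nl := strings.Index(rest, "\n"); nl != -1 {
		rest = rest[nl+1:]
	}
	end := strings.Index(rest, "```")
	if end == -1 {
		return "", false
	}
	return strings.TrimSpace(rest[:end]), true
}

func main() {
	out := "Here is the file:\n```yaml\nkey: value\n```\nExplanation follows."
	code, ok := extractCode(out)
	fmt.Println(ok, code)
}
```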
Types
type Backend
type Backend interface {
	// ListModels returns a list of all models supported by the backend.
	ListModels() []Model

	// DefaultModel returns the default model that should be used in the absence
	// of a specific choice by the user.
	DefaultModel() Model

	// Complete sends a prompt to a completion model.
	Complete(context.Context, Model, string) (Response, error)

	// Chat initiates a conversation with a chat model.
	Chat(Model) Conversation
}
Backend is an interface that must be implemented in order to support an LLM provider.
type Conversation
type Conversation interface {
	// Send sends a message to the model and returns the response.
	Send(context.Context, string, ...Message) (Response, error)
}
Conversation is an interface that must be implemented in order to support chat models in an LLM provider.
type Message
type Message struct {
	// Role is the type of the participant. The user is named "user" (in Amazon
	// Bedrock, this is equivalent to the "Human" identifier). Anything else is
	// considered the AI model.
	Role string `json:"role"`

	// Content is the text content of the message.
	Content string `json:"content"`
}
Message represents a single message in an exchange between a user and an AI model, either as part of a chat or a single completion request.
type Response
type Response struct {
	// FullOutput is the complete output returned by the API. This is generally
	// a Markdown-formatted message that contains the generated code, plus
	// explanations, if any.
	FullOutput string

	// Code is the extracted code section from the complete output. If code was
	// not found or extraction otherwise failed, this will be the same as
	// FullOutput.
	Code string

	// APIKeyUsed is the API key used when making the request.
	APIKeyUsed string

	// TokensUsed is the number of tokens utilized by the request. This is
	// the "usage.total_tokens" value returned from the API.
	TokensUsed int64
}
Response is the struct returned from methods generating code via the OpenAI API.