Documentation ¶
Overview ¶
Package ollamaclient can be used for communicating with the Ollama service.
Index ¶
- Variables
- func Base64EncodeFile(filePath string) (string, error)
- func ClearCache()
- func CloseCache()
- func InitCache() error
- func Massage(generatedOutput string) string
- type Config
- func (oc *Config) Complete(codeStart, codeEnd string) (string, error)
- func (oc *Config) ContinueChatResponse(messages []Message, promptAndOptionalImages ...string) (OutputResponse, error)
- func (oc *Config) CopyModel(source, destination string) error
- func (oc *Config) CreateModel(name, modelfile string) error
- func (oc *Config) DeleteModel(name string) error
- func (oc *Config) DescribeImages(imageFilenames []string, desiredWordCount int) (string, error)
- func (oc *Config) Embeddings(prompt string) ([]float64, error)
- func (oc *Config) GetBetweenResponse(prompt, suffix string) (OutputResponse, error)
- func (oc *Config) GetChatResponse(promptAndOptionalImages ...string) (OutputResponse, error)
- func (oc *Config) GetOutput(promptAndOptionalImages ...string) (string, error)
- func (oc *Config) GetOutputChatVision(promptAndOptionalImages ...string) (string, error)
- func (oc *Config) GetResponse(promptAndOptionalImages ...string) (OutputResponse, error)
- func (oc *Config) GetShowInfo() (ShowResponse, error)
- func (oc *Config) Has(model string) (bool, error)
- func (oc *Config) HasModel() (bool, error)
- func (oc *Config) List() ([]string, map[string]time.Time, map[string]int64, error)
- func (oc *Config) MustGetChatResponse(promptAndOptionalImages ...string) OutputResponse
- func (oc *Config) MustGetResponse(promptAndOptionalImages ...string) OutputResponse
- func (oc *Config) MustOutput(promptAndOptionalImages ...string) string
- func (oc *Config) Pull(optionalVerbose ...bool) (string, error)
- func (oc *Config) PullIfNeeded(optionalVerbose ...bool) error
- func (oc *Config) SetContextLength(contextLength int64)
- func (oc *Config) SetRandom()
- func (oc *Config) SetReproducible(optionalSeed ...int)
- func (oc *Config) SetSystemPrompt(prompt string)
- func (oc *Config) SetTool(tool Tool)
- func (oc *Config) SizeOf(model string) (int64, error)
- func (oc *Config) StreamBetween(callbackFunction func(string, bool), prompt, suffix string) error
- func (oc *Config) StreamOutput(callbackFunction func(string, bool), promptAndOptionalImages ...string) error
- func (oc *Config) Version() (string, error)
- type EmbeddingsRequest
- type EmbeddingsResponse
- type GenerateChatRequest
- type GenerateChatResponse
- type GenerateRequest
- type GenerateResponse
- type ListResponse
- type Message
- type MessageResponse
- type Model
- type OutputResponse
- type PullRequest
- type PullResponse
- type RequestOptions
- type ShowRequest
- type ShowResponse
- type Tool
- type ToolCall
- type ToolCallFunction
- type ToolFunction
- type ToolParameters
- type ToolProperty
- type VersionResponse
- type VisionRequest
Constants ¶
This section is empty.
Variables ¶
var Cache *bigcache.BigCache
Cache is used for caching reproducible results from Ollama (seed -1, temperature 0)
Functions ¶
func Base64EncodeFile ¶
Base64EncodeFile reads in a file and returns a base64-encoded string
Types ¶
type Config ¶
type Config struct {
	ServerAddr                string
	ModelName                 string
	SeedOrNegative            int
	TemperatureIfNegativeSeed float64
	PullTimeout               time.Duration
	HTTPTimeout               time.Duration
	TrimSpace                 bool
	Verbose                   bool
	ContextLength             int64
	SystemPrompt              string
	Tools                     []Tool
}
Config represents configuration details for communicating with the Ollama API
func NewConfig ¶
func NewConfig(serverAddr, modelName string, seedOrNegative int, temperatureIfNegativeSeed float64, pTimeout, hTimeout time.Duration, trimSpace, verbose bool) *Config
NewConfig initializes a new Config with the given server address (like http://localhost:11434), model name, seed (negative for a random seed), temperature, pull and HTTP timeouts, and trim-space and verbose flags
func (*Config) Complete ¶
Complete is a convenience function for completing code between two given strings of code
func (*Config) ContinueChatResponse ¶
func (oc *Config) ContinueChatResponse(messages []Message, promptAndOptionalImages ...string) (OutputResponse, error)
ContinueChatResponse sends a request to the Ollama API with previous messages and returns the generated response
func (*Config) CreateModel ¶
CreateModel creates a new model based on a Modelfile
func (*Config) DeleteModel ¶
DeleteModel removes a model from the server
func (*Config) DescribeImages ¶
DescribeImages loads a slice of image filenames as base64-encoded strings, builds a prompt that starts with "Describe this/these image(s):" followed by the encoded images, and returns the result. Typically used together with the "llava" model.
func (*Config) Embeddings ¶
Embeddings sends a request to get embeddings for a given prompt
func (*Config) GetBetweenResponse ¶
func (oc *Config) GetBetweenResponse(prompt, suffix string) (OutputResponse, error)
GetBetweenResponse is given the start and the end of a piece of code and will try to complete what goes in between. This function ignores oc.TrimSpace and does not trim blanks.
func (*Config) GetChatResponse ¶
func (oc *Config) GetChatResponse(promptAndOptionalImages ...string) (OutputResponse, error)
GetChatResponse sends a request to the Ollama API and returns the generated response
func (*Config) GetOutput ¶
GetOutput sends a request to the Ollama API and returns the generated output string
func (*Config) GetOutputChatVision ¶
GetOutputChatVision sends a request to the Ollama API and returns the generated response. It is similar to GetChatResponse, but it adds the images into the Message struct before sending them.
func (*Config) GetResponse ¶
func (oc *Config) GetResponse(promptAndOptionalImages ...string) (OutputResponse, error)
GetResponse sends a request to the Ollama API and returns the generated response
func (*Config) GetShowInfo ¶
func (oc *Config) GetShowInfo() (ShowResponse, error)
GetShowInfo sends a request to the "show" API and returns the response with model information.
func (*Config) MustGetChatResponse ¶
func (oc *Config) MustGetChatResponse(promptAndOptionalImages ...string) OutputResponse
MustGetChatResponse returns the response from Ollama, or a response with the Error field set if the request fails
func (*Config) MustGetResponse ¶
func (oc *Config) MustGetResponse(promptAndOptionalImages ...string) OutputResponse
MustGetResponse returns the response from Ollama, or a response with the Error field set if the request fails
func (*Config) MustOutput ¶
MustOutput returns the generated output string from Ollama, or the error message as a string if the request fails
func (*Config) PullIfNeeded ¶
PullIfNeeded pulls a model, but only if it is not already present (Pull, in contrast, downloads or updates the model regardless). Also takes an optional bool for whether progress bars should be shown while models are being downloaded.
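While a model is being pulled, each streamed PullResponse (documented below) carries Completed and Total byte counts, from which a progress percentage can be derived. A small sketch of that calculation, with illustrative numbers:

```go
package main

import "fmt"

// pullResponse mirrors the documented PullResponse struct.
type pullResponse struct {
	Status    string `json:"status"`
	Digest    string `json:"digest"`
	Total     int64  `json:"total"`
	Completed int64  `json:"completed"`
}

// percentDone converts the byte counters into a 0-100 percentage.
func percentDone(pr pullResponse) float64 {
	if pr.Total <= 0 {
		return 0 // avoid dividing by zero before the total is known
	}
	return 100 * float64(pr.Completed) / float64(pr.Total)
}

func main() {
	pr := pullResponse{Status: "pulling", Total: 4_000_000, Completed: 1_000_000}
	fmt.Printf("%s: %.0f%%\n", pr.Status, percentDone(pr)) // pulling: 25%
}
```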
func (*Config) SetContextLength ¶
SetContextLength sets the context length for this Ollama config
func (*Config) SetRandom ¶
func (oc *Config) SetRandom()
SetRandom configures the generated output to not be reproducible
func (*Config) SetReproducible ¶
SetReproducible configures the generated output to be reproducible, with temperature 0 and a specific seed. It takes an optional random seed.
func (*Config) SetSystemPrompt ¶
SetSystemPrompt sets the system prompt for this Ollama config
func (*Config) SizeOf ¶
SizeOf returns the current size of the given model in bytes, or returns (-1, err) if the model can't be found.
func (*Config) StreamBetween ¶
StreamBetween sends a request to the Ollama API and returns the generated output via a callback function. The callback function is given a string and "true" when the streaming is done (or if an error occurred).
func (*Config) StreamOutput ¶
func (oc *Config) StreamOutput(callbackFunction func(string, bool), promptAndOptionalImages ...string) error
StreamOutput sends a request to the Ollama API and returns the generated output via a callback function. The callback function is given a string and "true" when the streaming is done (or if an error occurred).
type EmbeddingsRequest ¶
EmbeddingsRequest represents the request payload for getting embeddings
type EmbeddingsResponse ¶
type EmbeddingsResponse struct {
Embeddings []float64 `json:"embedding"`
}
EmbeddingsResponse represents the response data containing embeddings
type GenerateChatRequest ¶
type GenerateChatRequest struct {
	Model    string         `json:"model"`
	Messages []Message      `json:"messages,omitempty"`
	Images   []string       `json:"images,omitempty"` // base64 encoded images
	Stream   bool           `json:"stream"`
	Tools    []Tool         `json:"tools,omitempty"`
	Options  RequestOptions `json:"options,omitempty"`
	Suffix   string         `json:"suffix,omitempty"`
}
GenerateChatRequest represents the request payload for generating chat output
type GenerateChatResponse ¶
type GenerateChatResponse struct {
	Model              string          `json:"model"`
	CreatedAt          string          `json:"created_at"`
	Message            MessageResponse `json:"message"`
	DoneReason         string          `json:"done_reason"`
	Done               bool            `json:"done"`
	TotalDuration      int64           `json:"total_duration,omitempty"`
	LoadDuration       int64           `json:"load_duration,omitempty"`
	PromptEvalCount    int             `json:"prompt_eval_count,omitempty"`
	PromptEvalDuration int64           `json:"prompt_eval_duration,omitempty"`
	EvalCount          int             `json:"eval_count,omitempty"`
	EvalDuration       int64           `json:"eval_duration,omitempty"`
}
GenerateChatResponse represents the response data from the generate chat API call
type GenerateRequest ¶
type GenerateRequest struct {
	Model   string         `json:"model"`
	System  string         `json:"system,omitempty"`
	Prompt  string         `json:"prompt,omitempty"`
	Images  []string       `json:"images,omitempty"` // base64 encoded images
	Stream  bool           `json:"stream,omitempty"`
	Options RequestOptions `json:"options,omitempty"`
	Suffix  string         `json:"suffix,omitempty"`
}
GenerateRequest represents the request payload for generating output
type GenerateResponse ¶
type GenerateResponse struct {
	Model              string `json:"model"`
	CreatedAt          string `json:"created_at"`
	Response           string `json:"response"`
	Context            []int  `json:"context,omitempty"`
	TotalDuration      int64  `json:"total_duration,omitempty"`
	LoadDuration       int64  `json:"load_duration,omitempty"`
	SampleCount        int    `json:"sample_count,omitempty"`
	SampleDuration     int64  `json:"sample_duration,omitempty"`
	PromptEvalCount    int    `json:"prompt_eval_count,omitempty"`
	PromptEvalDuration int64  `json:"prompt_eval_duration,omitempty"`
	EvalCount          int    `json:"eval_count,omitempty"`
	EvalDuration       int64  `json:"eval_duration,omitempty"`
	Done               bool   `json:"done"`
}
GenerateResponse represents the response data from the generate API call
type ListResponse ¶
type ListResponse struct {
Models []Model `json:"models"`
}
ListResponse represents the response data from the tag API call
type Message ¶
type Message struct {
	Role    string   `json:"role"`
	Content string   `json:"content"`
	Images  []string `json:"images,omitempty"` // base64 encoded images (for vision models)
}
Message is a chat message
type MessageResponse ¶
type MessageResponse struct {
	Role      string     `json:"role"`
	Content   string     `json:"content"`
	ToolCalls []ToolCall `json:"tool_calls"`
}
MessageResponse represents the response data from the generate API call
type Model ¶
type Model struct {
	Modified time.Time `json:"modified_at"`
	Name     string    `json:"name"`
	Digest   string    `json:"digest"`
	Size     int64     `json:"size"`
}
Model represents a downloaded model
type OutputResponse ¶
type OutputResponse struct {
	Role           string     `json:"role"`
	Response       string     `json:"response"`
	ToolCalls      []ToolCall `json:"tool_calls"`
	PromptTokens   int        `json:"prompt_tokens"`
	ResponseTokens int        `json:"response_tokens"`
	Error          string     `json:"error"`
}
OutputResponse represents the output from Ollama
type PullRequest ¶
type PullRequest struct {
	Name     string `json:"name"`
	Insecure bool   `json:"insecure,omitempty"`
	Stream   bool   `json:"stream,omitempty"`
}
PullRequest represents the request payload for pulling a model
type PullResponse ¶
type PullResponse struct {
	Status    string `json:"status"`
	Digest    string `json:"digest"`
	Total     int64  `json:"total"`
	Completed int64  `json:"completed"`
}
PullResponse represents the response data from the pull API call
type RequestOptions ¶
type RequestOptions struct {
	Seed          int     `json:"seed"`
	Temperature   float64 `json:"temperature"`
	ContextLength int64   `json:"num_ctx,omitempty"`
}
RequestOptions holds the seed and temperature
type ShowRequest ¶
type ShowRequest struct {
Model string `json:"model"`
}
ShowRequest represents the structure of the request payload for the "show" API.
type ShowResponse ¶
type ShowResponse struct {
	License    string `json:"license"`
	Modelfile  string `json:"modelfile"`
	Parameters string `json:"parameters"`
	Template   string `json:"template"`
	Details    struct {
		ParentModel       string   `json:"parent_model"`
		Format            string   `json:"format"`
		Family            string   `json:"family"`
		Families          []string `json:"families"`
		ParameterSize     string   `json:"parameter_size"`
		QuantizationLevel string   `json:"quantization_level"`
	} `json:"details"`
	ModelInfo struct {
		GeneralArchitecture               string   `json:"general.architecture"`
		GeneralBasename                   string   `json:"general.basename"`
		GeneralFileType                   int      `json:"general.file_type"`
		GeneralFinetune                   string   `json:"general.finetune"`
		GeneralLanguages                  []string `json:"general.languages"`
		GeneralLicense                    string   `json:"general.license"`
		GeneralParameterCount             int64    `json:"general.parameter_count"`
		GeneralQuantizationVersion        int      `json:"general.quantization_version"`
		GeneralSizeLabel                  string   `json:"general.size_label"`
		GeneralTags                       []string `json:"general.tags"`
		GeneralType                       string   `json:"general.type"`
		LlamaAttentionHeadCount           int      `json:"llama.attention.head_count"`
		LlamaAttentionHeadCountKv         int      `json:"llama.attention.head_count_kv"`
		LlamaAttentionLayerNormRmsEpsilon float64  `json:"llama.attention.layer_norm_rms_epsilon"`
		LlamaBlockCount                   int      `json:"llama.block_count"`
		LlamaContextLength                int      `json:"llama.context_length"`
		LlamaEmbeddingLength              int      `json:"llama.embedding_length"`
		LlamaFeedForwardLength            int      `json:"llama.feed_forward_length"`
		LlamaRopeDimensionCount           int      `json:"llama.rope.dimension_count"`
		LlamaRopeFreqBase                 int      `json:"llama.rope.freq_base"`
		LlamaVocabSize                    int      `json:"llama.vocab_size"`
		TokenizerGgmlBosTokenID           int      `json:"tokenizer.ggml.bos_token_id"`
		TokenizerGgmlEosTokenID           int      `json:"tokenizer.ggml.eos_token_id"`
		TokenizerGgmlMerges               any      `json:"tokenizer.ggml.merges"`
		TokenizerGgmlModel                string   `json:"tokenizer.ggml.model"`
		TokenizerGgmlPre                  string   `json:"tokenizer.ggml.pre"`
		TokenizerGgmlTokenType            any      `json:"tokenizer.ggml.token_type"`
		TokenizerGgmlTokens               any      `json:"tokenizer.ggml.tokens"`
	} `json:"model_info"`
	ModifiedAt string `json:"modified_at"`
}
ShowResponse represents the structure of the response payload from the "show" API.
type Tool ¶
type Tool struct {
	Type     string       `json:"type"`
	Function ToolFunction `json:"function"`
}
Tool represents a tool or function that can be used by the Ollama client
type ToolCall ¶
type ToolCall struct {
Function ToolCallFunction `json:"function"`
}
ToolCall represents a call to a tool function
type ToolCallFunction ¶
type ToolCallFunction struct {
	Name      string         `json:"name"`
	Arguments map[string]any `json:"arguments"`
}
ToolCallFunction represents the function call details within a tool call
type ToolFunction ¶
type ToolFunction struct {
	Name        string         `json:"name"`
	Description string         `json:"description"`
	Parameters  ToolParameters `json:"parameters"`
}
ToolFunction represents the function details within a tool
type ToolParameters ¶
type ToolParameters struct {
	Type       string                  `json:"type"`
	Properties map[string]ToolProperty `json:"properties"`
	Required   []string                `json:"required"`
}
ToolParameters represents the parameters of a tool
type ToolProperty ¶
type ToolProperty struct {
	Type        string   `json:"type"`
	Description string   `json:"description"`
	Enum        []string `json:"enum"`
}
ToolProperty represents a property of a tool's parameter
type VersionResponse ¶
type VersionResponse struct {
Version string `json:"version"`
}
VersionResponse represents the response data containing the Ollama version