Privacy Controls
You decide which model and provider you trust, or even choose to run an LLM locally on your laptop. You are in control of how private your data and preferences remain. NavamAI supports state-of-the-art models from Anthropic, OpenAI, Google, and Meta. You can choose a hosted provider, or Ollama as a local model provider on your laptop. Switch between models and providers with a simple command like `navamai config ask model llama`.
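As a quick sketch, the same `navamai config <command> <key> <value>` pattern should apply to other settings as well; only the `model` example is taken from above, and the `provider` line is an assumption based on the keys in the config file shown below:

```shell
# Point the ask command at the local llama model served by Ollama
navamai config ask model llama

# Assumed: the same pattern updates the provider key for a command
navamai config ask provider ollama
```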
You can also load custom model config sets mapped to each command. Configure these in the `navamai.yml` file. Here is an example of constraining how the `navamai ask` and `navamai intents` commands behave differently, using local and hosted model providers respectively.
```yaml
ask:
  provider: ollama
  model: mistral
  max-tokens: 300
  save: false
  system: Be crisp in your response. Only respond to the prompt
    using valid markdown syntax. Do not explain your response.
  temperature: 0.3

intents:
  provider: claude
  model: sonnet
  max-tokens: 1000
  save: true
  folder: Embeds
  system: Only respond to the prompt using valid markdown syntax.
    When responding with markdown headings start at level 2.
    Do not explain your response.
  temperature: 0.0
```
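With this configuration in place, a session might look like the following sketch. The prompt strings are hypothetical, and it assumes both commands accept a prompt as an argument:

```shell
# Runs locally against mistral via Ollama; the response stays on
# your laptop and is not saved (save: false)
navamai ask "List three privacy benefits of running an LLM locally"

# Calls the hosted Claude Sonnet model and saves the markdown
# response into the Embeds folder (save: true, folder: Embeds)
navamai intents "Vacation Planning"
```

This split keeps quick, low-stakes questions on a local model while reserving the hosted model for commands whose output you want saved and formatted more strictly.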