Mistral

OpenClaw supports Mistral for text/image model routing (mistral/...) and for audio transcription in media understanding via Voxtral. Mistral can also be used for memory embeddings (memorySearch.provider = "mistral").

CLI setup

openclaw onboard --auth-choice mistral-api-key
# or non-interactive
openclaw onboard --mistral-api-key "$MISTRAL_API_KEY"

Config snippet (LLM provider)

{
  env: { MISTRAL_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "mistral/mistral-large-latest" } } },
}
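The mistral/ prefix in the model ref selects the provider; presumably only the bare model id after the slash is what reaches the Mistral API. A minimal sketch of that split (split_model_ref is illustrative, not OpenClaw's actual code):

```python
# Hypothetical helper: split an OpenClaw model ref like
# "mistral/mistral-large-latest" into the provider name (used for
# routing/auth) and the bare model id sent to the Mistral API.
def split_model_ref(ref: str) -> tuple[str, str]:
    provider, _, model = ref.partition("/")
    return provider, model

provider, model = split_model_ref("mistral/mistral-large-latest")
# provider == "mistral", model == "mistral-large-latest"
```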

Built-in LLM catalog

OpenClaw currently ships this bundled Mistral catalog:
| Model ref | Input | Context | Max output | Notes |
| --- | --- | --- | --- | --- |
| mistral/mistral-large-latest | text, image | 262,144 | 16,384 | Default model |
| mistral/mistral-medium-2508 | text, image | 262,144 | 8,192 | Mistral Medium 3.1 |
| mistral/mistral-small-latest | text, image | 128,000 | 16,384 | Smaller multimodal model |
| mistral/pixtral-large-latest | text, image | 128,000 | 32,768 | Pixtral |
| mistral/codestral-latest | text | 256,000 | 4,096 | Coding |
| mistral/devstral-medium-latest | text | 262,144 | 32,768 | Devstral 2 |
| mistral/magistral-small | text | 128,000 | 40,000 | Reasoning-enabled |
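
Any catalog entry above can be set as the primary model using the same config shape as the earlier snippet. For example, switching to the coding model (assuming your deployment does not override the bundled catalog):

```json5
{
  agents: { defaults: { model: { primary: "mistral/codestral-latest" } } },
}
```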

Config snippet (audio transcription with Voxtral)

{
  tools: {
    media: {
      audio: {
        enabled: true,
        models: [{ provider: "mistral", model: "voxtral-mini-latest" }],
      },
    },
  },
}
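Per the notes below, transcription goes through /v1/audio/transcriptions with voxtral-mini-latest as the default model. A sketch of the request shape under those assumptions (build_transcription_request is a hypothetical helper, not part of OpenClaw):

```python
# Sketch of a Mistral audio-transcription request, based on the endpoint
# and default model named on this page. Hypothetical helper, not OpenClaw API.
BASE_URL = "https://api.mistral.ai/v1"

def build_transcription_request(audio_path: str, model: str = "voxtral-mini-latest") -> dict:
    """Return the URL and form fields for POST /v1/audio/transcriptions."""
    return {
        "url": f"{BASE_URL}/audio/transcriptions",
        "files": {"file": audio_path},  # uploaded as multipart form data
        "data": {"model": model},
    }

req = build_transcription_request("clip.ogg")
```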

Notes

  • Mistral auth uses MISTRAL_API_KEY.
  • Provider base URL defaults to https://api.mistral.ai/v1.
  • Onboarding default model is mistral/mistral-large-latest.
  • Media-understanding default audio model for Mistral is voxtral-mini-latest.
  • Media transcription path uses /v1/audio/transcriptions.
  • Memory embeddings path uses /v1/embeddings (default model: mistral-embed).
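
Putting the last two notes together, a memory-search embedding call would presumably be a JSON POST to /v1/embeddings with mistral-embed as the default model. A minimal sketch of that payload (build_embeddings_request is illustrative, not OpenClaw's code):

```python
# Sketch of the embeddings request shape used for memory search, based on
# the path and default model in the notes above. Hypothetical helper.
BASE_URL = "https://api.mistral.ai/v1"

def build_embeddings_request(texts: list[str], model: str = "mistral-embed") -> dict:
    """Return the URL and JSON body for POST /v1/embeddings."""
    return {
        "url": f"{BASE_URL}/embeddings",
        "json": {"model": model, "input": texts},
    }

req = build_embeddings_request(["note to remember"])
```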