# AI Providers

Magic Shell supports multiple AI providers, including OpenCode Zen and OpenRouter, as well as custom models for local or remote OpenAI-compatible endpoints.

OpenCode Zen provides curated models optimized for coding tasks. It’s the default provider and offers several free models.

| Model | Description |
| --- | --- |
| `big-pickle` | Stealth model - experimental (default) |
| `glm-4.7` | GLM 4.7 - good general purpose |
| `minimax-m2.1` | MiniMax’s capable model |
| `kimi-k2.5` | Moonshot’s latest model |

| Model | Description |
| --- | --- |
| `claude-sonnet-4-5` | Anthropic’s hybrid reasoning model |
| `claude-opus-4-5` | Anthropic’s most capable model |
| `claude-haiku-4-5` | Anthropic’s latest fast model |
| `kimi-k2` | Moonshot’s Kimi K2 |
| `kimi-k2-thinking` | Kimi K2 with extended reasoning |
| `qwen3-coder` | Alibaba’s massive coding model |
| `glm-4.6` | Zhipu AI’s capable model |
| `gemini-3-pro` | Google’s high-end Gemini model |
| `gemini-3-flash` | Google’s fast Gemini model |
| `gpt-5.2` | OpenAI’s flagship GPT model |
| `gpt-5.2-codex` | OpenAI’s coding-focused GPT model |
To get an API key:

  1. Visit opencode.ai/auth
  2. Sign up or log in
  3. Copy your API key
  4. Run `msh --setup` and paste it

OpenRouter provides access to models from many providers through a single API.

| Model | Description |
| --- | --- |
| `mimo-v2-flash` | MiMo V2 Flash |
| `deepseek-v3` | DeepSeek V3 |

| Model | Description |
| --- | --- |
| `claude-sonnet-4.5` | Anthropic Claude Sonnet 4.5 |
| `claude-opus-4.5` | Anthropic Claude Opus 4.5 |
| `claude-haiku-4.5` | Anthropic’s fast and efficient model |
| `deepseek-r1` | DeepSeek R1 reasoning model |
| `glm-4.7` | Zhipu AI’s capable model |
| `gemini-2.5-pro` | Google’s Gemini 2.5 Pro (stable until June 2026) |
| `gemini-2.5-flash` | Google’s fast Gemini 2.5 (stable until June 2026) |
To get an API key:

  1. Visit openrouter.ai/keys
  2. Create an account
  3. Generate a new API key
  4. Run `msh --setup` and select OpenRouter
```shell
# Switch to OpenRouter
msh --provider openrouter

# Switch back to OpenCode Zen
msh --provider opencode-zen
```

Press `Ctrl+X S` to open the provider switcher.

```shell
# List available models
msh --models

# Set default model
msh --model big-pickle
```

Press `Ctrl+X M` to open the model picker.

| Use Case | Recommended Model |
| --- | --- |
| Quick commands | `big-pickle` (free) |
| Complex git operations | `claude-sonnet-4.5` |
| System administration | `glm-4.7` or `minimax-m2.1` |
| Multi-step tasks | `claude-opus-4.5` or `kimi-k2-thinking` |
| Learning/experimenting | Any free model |

Set your API keys via environment variables:

```shell
# OpenCode Zen
export OPENCODE_ZEN_API_KEY="your-key"

# OpenRouter
export OPENROUTER_API_KEY="your-key"
```

Keys stored in the system keychain (via msh --setup) take precedence over environment variables.
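One shell detail worth noting: the key must be exported, not merely assigned, because `msh` runs as a child process and only exported variables reach child processes. A minimal sketch of the difference:

```shell
# A plain assignment stays in the current shell; only an exported
# variable is placed in the environment that child processes
# (such as msh) inherit.
OPENROUTER_API_KEY="your-key"
sh -c 'echo "before export: ${OPENROUTER_API_KEY:-unset}"'   # child cannot see it yet

export OPENROUTER_API_KEY
sh -c 'echo "after export: ${OPENROUTER_API_KEY:-unset}"'    # child sees the key
```

This is also why the `export` lines above belong in your shell profile (e.g. `~/.bashrc` or `~/.zshrc`) if you want them to persist across sessions.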

Magic Shell supports custom models for local or remote OpenAI-compatible endpoints. This is perfect for:

  • LM Studio - Run models locally on your machine
  • Ollama - Local model management
  • Self-hosted APIs - Your own OpenAI-compatible endpoints
  • Third-party services - Any OpenAI-compatible API
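Here, "OpenAI-compatible" means the server accepts chat-completion requests at `<base URL>/chat/completions`. As a rough sketch of what such a request looks like (the base URL and model ID are just the LM Studio defaults reused from the example later in this guide, not required values):

```shell
# Build the kind of request body an OpenAI-compatible endpoint expects.
# "model" must match the "API model ID" you give msh --add-model.
BASE_URL="http://localhost:1234/v1"
cat > payload.json <<'EOF'
{
  "model": "llama-3.2-3b",
  "messages": [{"role": "user", "content": "Say hello"}]
}
EOF
echo "POST $BASE_URL/chat/completions"

# With your server running, you could send it with:
#   curl -s "$BASE_URL/chat/completions" \
#     -H "Content-Type: application/json" -d @payload.json
```

If the endpoint answers that request with a JSON `choices` array, it should work as a Magic Shell custom model.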
```shell
# Interactive setup wizard
msh --add-model
```

You’ll be prompted for:

  • Model ID: A unique identifier (e.g., `my-local-llama`)
  • Display name: Human-readable name (e.g., Local Llama 3.2)
  • API model ID: The model identifier sent to the API (e.g., `llama-3.2-3b`)
  • Base URL: The API endpoint (e.g., `http://localhost:1234/v1`)
  • API key: Optional, stored securely in your system keychain
  • Category: `fast`, `smart`, or `reasoning`
```shell
# List all custom models
msh --list-custom

# Set a custom model as default
msh --model my-local-llama

# Remove a custom model
msh --remove-model my-local-llama
```
```shell
# Start LM Studio and load a model
# Then add it to Magic Shell:
msh --add-model
# Model ID: llama-3.2-local
# Display name: Local Llama 3.2
# API model ID: llama-3.2-3b
# Base URL: http://localhost:1234/v1
# API key: (leave empty for LM Studio)
# Category: smart

# Use it
msh --model llama-3.2-local "find all large files"
```

Custom model API keys are securely stored in your system keychain, just like provider API keys.